00:00:00.001 Started by upstream project "autotest-nightly" build number 4288 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3651 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.172 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.173 The recommended git tool is: git 00:00:00.173 using credential 00000000-0000-0000-0000-000000000002 00:00:00.177 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.207 Fetching changes from the remote Git repository 00:00:00.212 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.249 Using shallow fetch with depth 1 00:00:00.249 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.249 > git --version # timeout=10 00:00:00.281 > git --version # 'git version 2.39.2' 00:00:00.281 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.300 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.300 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.454 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.466 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.478 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.478 > git config core.sparsecheckout # timeout=10 00:00:05.489 > git read-tree -mu HEAD # timeout=10 00:00:05.505 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.525 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.525 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.609 [Pipeline] Start of Pipeline 00:00:05.619 [Pipeline] library 00:00:05.619 Loading library shm_lib@master 00:00:05.620 Library shm_lib@master is cached. Copying from home. 00:00:05.634 [Pipeline] node 00:00:05.645 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.646 [Pipeline] { 00:00:05.653 [Pipeline] catchError 00:00:05.654 [Pipeline] { 00:00:05.666 [Pipeline] wrap 00:00:05.675 [Pipeline] { 00:00:05.683 [Pipeline] stage 00:00:05.685 [Pipeline] { (Prologue) 00:00:05.701 [Pipeline] echo 00:00:05.702 Node: VM-host-SM38 00:00:05.708 [Pipeline] cleanWs 00:00:05.719 [WS-CLEANUP] Deleting project workspace... 00:00:05.719 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.726 [WS-CLEANUP] done 00:00:05.935 [Pipeline] setCustomBuildProperty 00:00:06.019 [Pipeline] httpRequest 00:00:06.381 [Pipeline] echo 00:00:06.384 Sorcerer 10.211.164.20 is alive 00:00:06.396 [Pipeline] retry 00:00:06.398 [Pipeline] { 00:00:06.415 [Pipeline] httpRequest 00:00:06.421 HttpMethod: GET 00:00:06.423 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.423 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.424 Response Code: HTTP/1.1 200 OK 00:00:06.424 Success: Status code 200 is in the accepted range: 200,404 00:00:06.424 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.085 [Pipeline] } 00:00:07.098 [Pipeline] // retry 00:00:07.105 [Pipeline] sh 00:00:07.397 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.416 [Pipeline] httpRequest 00:00:08.444 [Pipeline] echo 00:00:08.445 Sorcerer 10.211.164.20 is alive 00:00:08.455 [Pipeline] retry 00:00:08.456 [Pipeline] { 00:00:08.469 [Pipeline] httpRequest 00:00:08.475 HttpMethod: GET 00:00:08.476 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:08.476 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:00:08.489 Response Code: HTTP/1.1 200 OK 00:00:08.490 Success: Status code 200 is in the accepted range: 200,404 00:00:08.491 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:17.802 [Pipeline] } 00:01:17.821 [Pipeline] // retry 00:01:17.831 [Pipeline] sh 00:01:18.119 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz 00:01:20.672 [Pipeline] sh 00:01:20.958 + git -C spdk log --oneline -n5 00:01:20.958 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:01:20.958 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit() 00:01:20.958 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size 00:01:20.958 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy() 00:01:20.958 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT 00:01:20.978 [Pipeline] writeFile 00:01:20.992 [Pipeline] sh 00:01:21.278 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:21.291 [Pipeline] sh 00:01:21.576 + cat autorun-spdk.conf 00:01:21.576 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.576 SPDK_TEST_NVME=1 00:01:21.576 SPDK_TEST_FTL=1 00:01:21.576 SPDK_TEST_ISAL=1 00:01:21.576 SPDK_RUN_ASAN=1 00:01:21.576 SPDK_RUN_UBSAN=1 00:01:21.576 SPDK_TEST_XNVME=1 00:01:21.576 SPDK_TEST_NVME_FDP=1 00:01:21.576 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.585 RUN_NIGHTLY=1 00:01:21.587 [Pipeline] } 00:01:21.601 [Pipeline] // stage 00:01:21.616 [Pipeline] stage 00:01:21.619 [Pipeline] { (Run VM) 00:01:21.632 [Pipeline] sh 00:01:21.955 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:21.955 + echo 'Start stage prepare_nvme.sh' 00:01:21.955 Start stage prepare_nvme.sh 00:01:21.955 + [[ -n 9 ]] 00:01:21.955 + disk_prefix=ex9 00:01:21.955 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:21.955 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:21.955 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:21.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:21.955 ++ 
SPDK_TEST_NVME=1 00:01:21.955 ++ SPDK_TEST_FTL=1 00:01:21.955 ++ SPDK_TEST_ISAL=1 00:01:21.955 ++ SPDK_RUN_ASAN=1 00:01:21.955 ++ SPDK_RUN_UBSAN=1 00:01:21.955 ++ SPDK_TEST_XNVME=1 00:01:21.955 ++ SPDK_TEST_NVME_FDP=1 00:01:21.955 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:21.955 ++ RUN_NIGHTLY=1 00:01:21.955 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:21.955 + nvme_files=() 00:01:21.955 + declare -A nvme_files 00:01:21.955 + backend_dir=/var/lib/libvirt/images/backends 00:01:21.955 + nvme_files['nvme.img']=5G 00:01:21.955 + nvme_files['nvme-cmb.img']=5G 00:01:21.955 + nvme_files['nvme-multi0.img']=4G 00:01:21.955 + nvme_files['nvme-multi1.img']=4G 00:01:21.955 + nvme_files['nvme-multi2.img']=4G 00:01:21.955 + nvme_files['nvme-openstack.img']=8G 00:01:21.955 + nvme_files['nvme-zns.img']=5G 00:01:21.955 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:21.955 + (( SPDK_TEST_FTL == 1 )) 00:01:21.956 + nvme_files["nvme-ftl.img"]=6G 00:01:21.956 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:21.956 + nvme_files["nvme-fdp.img"]=1G 00:01:21.956 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:21.956 + for nvme in "${!nvme_files[@]}" 00:01:21.956 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:22.271 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:22.271 + for nvme in "${!nvme_files[@]}" 00:01:22.271 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:01:22.840 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:22.840 + for nvme in "${!nvme_files[@]}" 00:01:22.840 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:23.100 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.100 + for nvme in "${!nvme_files[@]}" 00:01:23.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:23.100 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:23.100 + for nvme in "${!nvme_files[@]}" 00:01:23.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:23.101 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:23.101 + for nvme in "${!nvme_files[@]}" 00:01:23.101 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:23.361 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.361 + for nvme in "${!nvme_files[@]}" 00:01:23.361 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:23.930 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:23.930 + for nvme in "${!nvme_files[@]}" 00:01:23.930 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:01:24.191 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:24.191 + for nvme in "${!nvme_files[@]}" 00:01:24.191 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:24.760 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:24.760 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:24.760 + echo 'End stage prepare_nvme.sh' 00:01:24.760 End stage prepare_nvme.sh 00:01:24.773 [Pipeline] sh 00:01:25.057 + DISTRO=fedora39 00:01:25.057 + CPUS=10 00:01:25.057 + RAM=12288 00:01:25.057 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:25.057 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:25.057 00:01:25.057 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:25.057 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:25.057 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:25.057 HELP=0 00:01:25.057 DRY_RUN=0 00:01:25.057 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:01:25.057 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:25.057 NVME_AUTO_CREATE=0 00:01:25.057 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:01:25.057 NVME_CMB=,,,, 00:01:25.057 NVME_PMR=,,,, 00:01:25.057 NVME_ZNS=,,,, 00:01:25.057 NVME_MS=true,,,, 00:01:25.057 NVME_FDP=,,,on, 00:01:25.057 SPDK_VAGRANT_DISTRO=fedora39 00:01:25.057 SPDK_VAGRANT_VMCPU=10 00:01:25.057 SPDK_VAGRANT_VMRAM=12288 00:01:25.057 SPDK_VAGRANT_PROVIDER=libvirt 00:01:25.057 SPDK_VAGRANT_HTTP_PROXY= 00:01:25.057 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:25.057 SPDK_OPENSTACK_NETWORK=0 00:01:25.057 VAGRANT_PACKAGE_BOX=0 00:01:25.057 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:25.057 FORCE_DISTRO=true 00:01:25.057 VAGRANT_BOX_VERSION= 00:01:25.057 EXTRA_VAGRANTFILES= 00:01:25.057 NIC_MODEL=e1000 00:01:25.057 00:01:25.057 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:25.057 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:27.606 Bringing machine 'default' up with 'libvirt' provider... 00:01:28.178 ==> default: Creating image (snapshot of base box volume). 00:01:28.178 ==> default: Creating domain with the following settings... 
00:01:28.178 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732152431_f9fce0a08d569cd7b8df 00:01:28.178 ==> default: -- Domain type: kvm 00:01:28.178 ==> default: -- Cpus: 10 00:01:28.178 ==> default: -- Feature: acpi 00:01:28.178 ==> default: -- Feature: apic 00:01:28.178 ==> default: -- Feature: pae 00:01:28.178 ==> default: -- Memory: 12288M 00:01:28.178 ==> default: -- Memory Backing: hugepages: 00:01:28.178 ==> default: -- Management MAC: 00:01:28.178 ==> default: -- Loader: 00:01:28.178 ==> default: -- Nvram: 00:01:28.178 ==> default: -- Base box: spdk/fedora39 00:01:28.178 ==> default: -- Storage pool: default 00:01:28.178 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732152431_f9fce0a08d569cd7b8df.img (20G) 00:01:28.178 ==> default: -- Volume Cache: default 00:01:28.178 ==> default: -- Kernel: 00:01:28.178 ==> default: -- Initrd: 00:01:28.178 ==> default: -- Graphics Type: vnc 00:01:28.178 ==> default: -- Graphics Port: -1 00:01:28.178 ==> default: -- Graphics IP: 127.0.0.1 00:01:28.178 ==> default: -- Graphics Password: Not defined 00:01:28.178 ==> default: -- Video Type: cirrus 00:01:28.178 ==> default: -- Video VRAM: 9216 00:01:28.178 ==> default: -- Sound Type: 00:01:28.178 ==> default: -- Keymap: en-us 00:01:28.178 ==> default: -- TPM Path: 00:01:28.178 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:28.178 ==> default: -- Command line args: 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:28.178 ==> default: -> value=-device, 00:01:28.178 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:28.178 ==> default: -> value=-drive, 00:01:28.178 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:28.178 ==> default: -> value=-device, 00:01:28.179 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:28.440 ==> default: Creating shared folders metadata... 00:01:28.440 ==> default: Starting domain. 00:01:30.987 ==> default: Waiting for domain to get an IP address... 00:01:49.184 ==> default: Waiting for SSH to become available... 00:01:50.127 ==> default: Configuring and enabling network interfaces... 00:01:54.348 default: SSH address: 192.168.121.143:22 00:01:54.348 default: SSH username: vagrant 00:01:54.348 default: SSH auth method: private key 00:01:56.264 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:04.414 ==> default: Mounting SSHFS shared folder... 00:02:06.346 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:06.346 ==> default: Checking Mount.. 00:02:07.734 ==> default: Folder Successfully Mounted! 00:02:07.734 00:02:07.734 SUCCESS! 00:02:07.734 00:02:07.734 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:07.734 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:07.734 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:07.734 00:02:07.745 [Pipeline] } 00:02:07.760 [Pipeline] // stage 00:02:07.770 [Pipeline] dir 00:02:07.771 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:07.773 [Pipeline] { 00:02:07.786 [Pipeline] catchError 00:02:07.788 [Pipeline] { 00:02:07.801 [Pipeline] sh 00:02:08.085 + vagrant ssh-config --host vagrant 00:02:08.085 + sed -ne '/^Host/,$p' 00:02:08.085 + tee ssh_conf 00:02:10.635 Host vagrant 00:02:10.635 HostName 192.168.121.143 00:02:10.635 User vagrant 00:02:10.635 Port 22 00:02:10.635 UserKnownHostsFile /dev/null 00:02:10.635 StrictHostKeyChecking no 00:02:10.635 PasswordAuthentication no 00:02:10.635 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:10.635 IdentitiesOnly yes 00:02:10.635 LogLevel FATAL 00:02:10.635 ForwardAgent yes 00:02:10.635 ForwardX11 yes 00:02:10.635 00:02:10.652 [Pipeline] withEnv 00:02:10.655 [Pipeline] { 00:02:10.671 [Pipeline] sh 00:02:10.953 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:10.953 source /etc/os-release 00:02:10.953 [[ -e /image.version ]] && img=$(< /image.version) 00:02:10.953 # Minimal, systemd-like check. 
00:02:10.954 if [[ -e /.dockerenv ]]; then 00:02:10.954 # Clear garbage from the node'\''s name: 00:02:10.954 # agt-er_autotest_547-896 -> autotest_547-896 00:02:10.954 # $HOSTNAME is the actual container id 00:02:10.954 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:10.954 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:10.954 # We can assume this is a mount from a host where container is running, 00:02:10.954 # so fetch its hostname to easily identify the target swarm worker. 00:02:10.954 container="$(< /etc/hostname) ($agent)" 00:02:10.954 else 00:02:10.954 # Fallback 00:02:10.954 container=$agent 00:02:10.954 fi 00:02:10.954 fi 00:02:10.954 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:10.954 ' 00:02:11.227 [Pipeline] } 00:02:11.243 [Pipeline] // withEnv 00:02:11.252 [Pipeline] setCustomBuildProperty 00:02:11.268 [Pipeline] stage 00:02:11.270 [Pipeline] { (Tests) 00:02:11.289 [Pipeline] sh 00:02:11.575 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:11.852 [Pipeline] sh 00:02:12.136 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:12.415 [Pipeline] timeout 00:02:12.416 Timeout set to expire in 50 min 00:02:12.418 [Pipeline] { 00:02:12.434 [Pipeline] sh 00:02:12.719 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:13.312 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc 00:02:13.327 [Pipeline] sh 00:02:13.615 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:13.894 [Pipeline] sh 00:02:14.182 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:14.464 [Pipeline] sh 00:02:14.747 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:14.747 ++ readlink -f spdk_repo 00:02:14.747 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:14.747 + [[ -n /home/vagrant/spdk_repo ]] 00:02:14.747 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:14.747 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:14.747 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:14.747 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:14.747 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:14.747 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:14.747 + cd /home/vagrant/spdk_repo 00:02:14.747 + source /etc/os-release 00:02:14.747 ++ NAME='Fedora Linux' 00:02:14.747 ++ VERSION='39 (Cloud Edition)' 00:02:14.747 ++ ID=fedora 00:02:14.747 ++ VERSION_ID=39 00:02:14.747 ++ VERSION_CODENAME= 00:02:14.747 ++ PLATFORM_ID=platform:f39 00:02:14.747 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:14.747 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:14.747 ++ LOGO=fedora-logo-icon 00:02:14.747 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:14.747 ++ HOME_URL=https://fedoraproject.org/ 00:02:14.747 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:14.747 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:14.747 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:14.747 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:14.747 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:14.747 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:14.747 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:14.747 ++ SUPPORT_END=2024-11-12 00:02:14.747 ++ VARIANT='Cloud Edition' 00:02:14.747 ++ VARIANT_ID=cloud 00:02:14.747 + uname -a 00:02:14.747 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:14.747 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:15.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:15.582 Hugepages 00:02:15.582 node hugesize free / total 00:02:15.582 node0 1048576kB 0 / 0 00:02:15.582 node0 2048kB 0 / 0 00:02:15.582 00:02:15.582 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:15.582 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:15.582 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:15.582 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:15.582 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:15.582 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:15.582 + rm -f /tmp/spdk-ld-path 00:02:15.582 + source autorun-spdk.conf 00:02:15.582 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.582 ++ SPDK_TEST_NVME=1 00:02:15.582 ++ SPDK_TEST_FTL=1 00:02:15.582 ++ SPDK_TEST_ISAL=1 00:02:15.582 ++ SPDK_RUN_ASAN=1 00:02:15.582 ++ SPDK_RUN_UBSAN=1 00:02:15.582 ++ SPDK_TEST_XNVME=1 00:02:15.582 ++ SPDK_TEST_NVME_FDP=1 00:02:15.582 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.582 ++ RUN_NIGHTLY=1 00:02:15.582 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.582 + [[ -n '' ]] 00:02:15.582 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:15.582 + for M in /var/spdk/build-*-manifest.txt 00:02:15.582 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:15.582 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.582 + for M in /var/spdk/build-*-manifest.txt 00:02:15.582 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.582 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.582 + for M in /var/spdk/build-*-manifest.txt 00:02:15.582 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.582 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:15.582 ++ uname 00:02:15.582 + [[ Linux == \L\i\n\u\x ]] 00:02:15.582 + sudo dmesg -T 00:02:15.582 + sudo dmesg --clear 00:02:15.844 + dmesg_pid=5041 00:02:15.844 
+ [[ Fedora Linux == FreeBSD ]] 00:02:15.844 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.844 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.844 + sudo dmesg -Tw 00:02:15.844 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.844 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.844 + export FIO_BIN=/usr/src/fio-static/fio 00:02:15.844 + FIO_BIN=/usr/src/fio-static/fio 00:02:15.844 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.844 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:15.844 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.844 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.844 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.844 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.844 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.844 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.844 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.844 01:27:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:15.844 01:27:59 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:15.844 01:27:59 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1 00:02:15.844 01:27:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:15.844 01:27:59 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:15.844 01:27:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:15.844 01:27:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:15.844 01:27:59 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:15.844 01:27:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.844 01:27:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.844 01:27:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.844 01:27:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.844 01:27:59 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.844 01:27:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.844 01:27:59 -- paths/export.sh@5 -- $ export PATH 00:02:15.844 01:27:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.844 01:27:59 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:15.844 01:27:59 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:15.844 01:27:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732152479.XXXXXX 00:02:15.844 01:27:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732152479.R9QXpM 00:02:15.844 01:27:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:15.844 01:27:59 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:15.844 01:27:59 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:15.845 01:27:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:15.845 01:27:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.845 01:27:59 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:15.845 01:27:59 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:15.845 01:27:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.845 01:27:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:15.845 01:27:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:15.845 01:27:59 -- pm/common@17 -- $ local monitor 00:02:15.845 01:27:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.845 01:27:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.845 01:27:59 -- pm/common@25 -- $ sleep 1 00:02:15.845 01:27:59 -- pm/common@21 -- $ date +%s 00:02:15.845 01:27:59 -- pm/common@21 -- $ date +%s 00:02:15.845 01:27:59 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732152479 00:02:15.845 01:27:59 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732152479 00:02:15.845 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732152479_collect-cpu-load.pm.log 00:02:15.845 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732152479_collect-vmstat.pm.log 00:02:16.789 01:28:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:16.789 01:28:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.789 01:28:00 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.789 01:28:00 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:16.789 01:28:00 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.789 Thu Nov 21 01:28:00 AM UTC 2024 00:02:16.789 01:28:00 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.789 v25.01-pre-219-g557f022f6 00:02:16.789 01:28:00 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:16.789 01:28:00 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:16.789 01:28:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:16.789 01:28:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.789 01:28:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.789 ************************************ 00:02:16.789 START TEST asan 00:02:16.789 ************************************ 00:02:16.789 using asan 00:02:16.789 ************************************ 00:02:16.789 END TEST asan 00:02:16.789 ************************************ 00:02:16.789 01:28:00 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:16.789 00:02:16.789 real 0m0.000s 00:02:16.789 user 0m0.000s 00:02:16.789 sys 0m0.000s 00:02:16.789 01:28:00 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.789 01:28:00 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:17.050 01:28:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:17.050 01:28:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:17.050 01:28:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:17.050 01:28:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:17.050 01:28:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.050 ************************************ 00:02:17.050 START TEST ubsan 00:02:17.050 ************************************ 00:02:17.050 using ubsan 00:02:17.050 01:28:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:17.050 00:02:17.050 real 0m0.000s 00:02:17.050 user 0m0.000s 00:02:17.050 sys 0m0.000s 00:02:17.050 01:28:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:17.050 01:28:00 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:17.050 ************************************ 00:02:17.050 END TEST ubsan 00:02:17.050 ************************************ 00:02:17.050 01:28:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:17.050 01:28:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:17.050 01:28:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:17.050 01:28:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:17.050 01:28:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:17.050 01:28:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:17.050 01:28:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:17.050 01:28:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:17.050 01:28:00 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:17.050 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:17.051 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:17.311 Using 'verbs' RDMA provider 00:02:28.245 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:38.216 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:38.216 Creating mk/config.mk...done. 00:02:38.216 Creating mk/cc.flags.mk...done. 00:02:38.216 Type 'make' to build. 00:02:38.216 01:28:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:38.216 01:28:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:38.216 01:28:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:38.216 01:28:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:38.216 ************************************ 00:02:38.216 START TEST make 00:02:38.216 ************************************ 00:02:38.216 01:28:21 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:38.474 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:38.474 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:38.474 meson setup builddir \ 00:02:38.475 -Dwith-libaio=enabled \ 00:02:38.475 -Dwith-liburing=enabled \ 00:02:38.475 -Dwith-libvfn=disabled \ 00:02:38.475 -Dwith-spdk=disabled \ 00:02:38.475 -Dexamples=false \ 00:02:38.475 -Dtests=false \ 00:02:38.475 -Dtools=false && \ 00:02:38.475 meson compile -C builddir && \ 00:02:38.475 cd -) 00:02:38.475 make[1]: Nothing to be done for 'all'. 
00:02:40.377 The Meson build system 00:02:40.377 Version: 1.5.0 00:02:40.377 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:40.377 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:40.377 Build type: native build 00:02:40.378 Project name: xnvme 00:02:40.378 Project version: 0.7.5 00:02:40.378 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:40.378 C linker for the host machine: cc ld.bfd 2.40-14 00:02:40.378 Host machine cpu family: x86_64 00:02:40.378 Host machine cpu: x86_64 00:02:40.378 Message: host_machine.system: linux 00:02:40.378 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:40.378 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:40.378 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:40.378 Run-time dependency threads found: YES 00:02:40.378 Has header "setupapi.h" : NO 00:02:40.378 Has header "linux/blkzoned.h" : YES 00:02:40.378 Has header "linux/blkzoned.h" : YES (cached) 00:02:40.378 Has header "libaio.h" : YES 00:02:40.378 Library aio found: YES 00:02:40.378 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:40.378 Run-time dependency liburing found: YES 2.2 00:02:40.378 Dependency libvfn skipped: feature with-libvfn disabled 00:02:40.378 Found CMake: /usr/bin/cmake (3.27.7) 00:02:40.378 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:40.378 Subproject spdk : skipped: feature with-spdk disabled 00:02:40.378 Run-time dependency appleframeworks found: NO (tried framework) 00:02:40.378 Run-time dependency appleframeworks found: NO (tried framework) 00:02:40.378 Library rt found: YES 00:02:40.378 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:40.378 Configuring xnvme_config.h using configuration 00:02:40.378 Configuring xnvme.spec using configuration 00:02:40.378 Run-time dependency bash-completion found: YES 2.11 00:02:40.378 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:40.378 Program cp found: YES (/usr/bin/cp) 00:02:40.378 Build targets in project: 3 00:02:40.378 00:02:40.378 xnvme 0.7.5 00:02:40.378 00:02:40.378 Subprojects 00:02:40.378 spdk : NO Feature 'with-spdk' disabled 00:02:40.378 00:02:40.378 User defined options 00:02:40.378 examples : false 00:02:40.378 tests : false 00:02:40.378 tools : false 00:02:40.378 with-libaio : enabled 00:02:40.378 with-liburing: enabled 00:02:40.378 with-libvfn : disabled 00:02:40.378 with-spdk : disabled 00:02:40.378 00:02:40.378 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.637 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:40.637 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:40.637 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:40.637 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:40.637 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:40.637 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:40.637 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:40.637 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:40.895 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:40.895 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:40.895 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:40.895 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:40.895 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:40.895 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:40.895 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:40.895 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:40.895 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:40.895 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:40.895 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:40.895 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:40.895 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:40.895 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:40.895 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:40.895 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:40.895 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:40.895 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:40.895 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:40.895 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:40.895 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:40.895 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:40.895 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:40.895 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:40.895 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:40.895 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:40.895 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:40.895 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:40.895 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:40.895 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:40.895 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:40.895 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:41.154 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:41.154 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:41.154 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:41.154 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:41.154 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:41.154 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:41.154 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:41.154 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:41.154 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:41.154 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:41.154 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:41.154 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:41.154 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:41.154 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:41.154 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:41.154 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:41.154 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:41.154 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:41.154 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:41.154 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:41.154 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:41.154 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:41.154 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:41.154 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:41.154 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:41.154 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:41.154 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:41.154 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:41.413 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:41.413 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:41.413 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:41.413 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:41.413 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:41.413 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:41.671 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:41.671 [75/76] Linking static target lib/libxnvme.a 00:02:41.671 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:41.671 INFO: autodetecting backend as ninja 00:02:41.671 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:41.930 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:48.566 The Meson build system 00:02:48.566 Version: 1.5.0 00:02:48.566 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:48.566 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:48.566 Build type: native build 00:02:48.566 Program cat found: YES (/usr/bin/cat) 00:02:48.566 Project name: DPDK 00:02:48.566 Project version: 24.03.0 00:02:48.566 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.566 C linker for the host machine: cc ld.bfd 2.40-14 00:02:48.566 Host machine cpu family: x86_64 00:02:48.566 Host machine cpu: x86_64 00:02:48.566 Message: ## Building in Developer Mode ## 00:02:48.566 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.566 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.566 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.566 Program python3 found: YES (/usr/bin/python3) 00:02:48.566 Program cat found: YES (/usr/bin/cat) 00:02:48.566 Compiler for C supports arguments -march=native: YES 00:02:48.566 Checking for size of "void *" : 8 00:02:48.566 Checking for size of "void *" : 8 (cached) 00:02:48.566 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:48.566 Library m found: YES 00:02:48.566 Library numa found: YES 00:02:48.566 Has header "numaif.h" : YES 00:02:48.566 Library fdt found: NO 00:02:48.566 Library execinfo found: NO 00:02:48.566 Has header "execinfo.h" : YES 00:02:48.566 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.566 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.566 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.566 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.566 Run-time dependency openssl found: YES 3.1.1 00:02:48.566 Run-time dependency libpcap found: YES 1.10.4 00:02:48.566 Has header "pcap.h" with dependency libpcap: YES 00:02:48.566 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.566 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.566 Compiler for C supports arguments -Wformat: YES 00:02:48.566 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.566 Compiler for C supports arguments -Wformat-security: NO 00:02:48.566 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.566 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.566 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.566 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.566 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.566 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.566 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.566 Compiler for C supports arguments -Wundef: YES 00:02:48.566 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.566 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.566 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.566 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.566 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.566 Program objdump found: YES (/usr/bin/objdump) 00:02:48.566 Compiler for C supports arguments -mavx512f: YES 00:02:48.566 Checking if "AVX512 checking" compiles: YES 00:02:48.566 Fetching value of define "__SSE4_2__" : 1 00:02:48.566 Fetching value of define "__AES__" : 1 00:02:48.566 Fetching value of define "__AVX__" : 1 00:02:48.566 Fetching value of define "__AVX2__" : 1 00:02:48.566 Fetching value of define "__AVX512BW__" : 1 00:02:48.566 Fetching value of define "__AVX512CD__" : 1 00:02:48.566 Fetching value of define "__AVX512DQ__" : 1 00:02:48.566 Fetching value of define "__AVX512F__" : 1 00:02:48.566 Fetching value of define "__AVX512VL__" : 1 00:02:48.566 Fetching value of define "__PCLMUL__" : 1 00:02:48.566 Fetching value of define "__RDRND__" : 1 00:02:48.566 Fetching value of define "__RDSEED__" : 1 00:02:48.566 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:48.566 Fetching value of define "__znver1__" : (undefined) 00:02:48.566 Fetching value of define "__znver2__" : (undefined) 00:02:48.566 Fetching value of define "__znver3__" : (undefined) 00:02:48.566 Fetching value of define "__znver4__" : (undefined) 00:02:48.566 Library asan found: YES 00:02:48.566 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.566 Message: lib/log: Defining dependency "log" 00:02:48.566 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.566 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.566 Library rt found: YES 00:02:48.566 Checking for function "getentropy" : NO 00:02:48.566 Message: 
lib/eal: Defining dependency "eal" 00:02:48.566 Message: lib/ring: Defining dependency "ring" 00:02:48.566 Message: lib/rcu: Defining dependency "rcu" 00:02:48.566 Message: lib/mempool: Defining dependency "mempool" 00:02:48.566 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.566 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.566 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.566 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.566 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.566 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.566 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:48.566 Compiler for C supports arguments -mpclmul: YES 00:02:48.566 Compiler for C supports arguments -maes: YES 00:02:48.566 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.566 Compiler for C supports arguments -mavx512bw: YES 00:02:48.566 Compiler for C supports arguments -mavx512dq: YES 00:02:48.566 Compiler for C supports arguments -mavx512vl: YES 00:02:48.566 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.566 Compiler for C supports arguments -mavx2: YES 00:02:48.566 Compiler for C supports arguments -mavx: YES 00:02:48.566 Message: lib/net: Defining dependency "net" 00:02:48.566 Message: lib/meter: Defining dependency "meter" 00:02:48.566 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.566 Message: lib/pci: Defining dependency "pci" 00:02:48.566 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.566 Message: lib/hash: Defining dependency "hash" 00:02:48.566 Message: lib/timer: Defining dependency "timer" 00:02:48.566 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.566 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.566 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.566 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.566 Message: lib/power: Defining dependency "power" 00:02:48.566 Message: lib/reorder: Defining dependency "reorder" 00:02:48.566 Message: lib/security: Defining dependency "security" 00:02:48.566 Has header "linux/userfaultfd.h" : YES 00:02:48.566 Has header "linux/vduse.h" : YES 00:02:48.566 Message: lib/vhost: Defining dependency "vhost" 00:02:48.566 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.566 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.566 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.567 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.567 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.567 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.567 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.567 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.567 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.567 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.567 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:48.567 Configuring doxy-api-html.conf using configuration 00:02:48.567 Configuring doxy-api-man.conf using configuration 00:02:48.567 Program mandb found: YES (/usr/bin/mandb) 00:02:48.567 Program sphinx-build found: NO 00:02:48.567 Configuring rte_build_config.h using configuration 00:02:48.567 Message: 00:02:48.567 ================= 00:02:48.567 Applications Enabled 00:02:48.567 
================= 00:02:48.567 00:02:48.567 apps: 00:02:48.567 00:02:48.567 00:02:48.567 Message: 00:02:48.567 ================= 00:02:48.567 Libraries Enabled 00:02:48.567 ================= 00:02:48.567 00:02:48.567 libs: 00:02:48.567 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.567 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.567 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.567 00:02:48.567 Message: 00:02:48.567 =============== 00:02:48.567 Drivers Enabled 00:02:48.567 =============== 00:02:48.567 00:02:48.567 common: 00:02:48.567 00:02:48.567 bus: 00:02:48.567 pci, vdev, 00:02:48.567 mempool: 00:02:48.567 ring, 00:02:48.567 dma: 00:02:48.567 00:02:48.567 net: 00:02:48.567 00:02:48.567 crypto: 00:02:48.567 00:02:48.567 compress: 00:02:48.567 00:02:48.567 vdpa: 00:02:48.567 00:02:48.567 00:02:48.567 Message: 00:02:48.567 ================= 00:02:48.567 Content Skipped 00:02:48.567 ================= 00:02:48.567 00:02:48.567 apps: 00:02:48.567 dumpcap: explicitly disabled via build config 00:02:48.567 graph: explicitly disabled via build config 00:02:48.567 pdump: explicitly disabled via build config 00:02:48.567 proc-info: explicitly disabled via build config 00:02:48.567 test-acl: explicitly disabled via build config 00:02:48.567 test-bbdev: explicitly disabled via build config 00:02:48.567 test-cmdline: explicitly disabled via build config 00:02:48.567 test-compress-perf: explicitly disabled via build config 00:02:48.567 test-crypto-perf: explicitly disabled via build config 00:02:48.567 test-dma-perf: explicitly disabled via build config 00:02:48.567 test-eventdev: explicitly disabled via build config 00:02:48.567 test-fib: explicitly disabled via build config 00:02:48.567 test-flow-perf: explicitly disabled via build config 00:02:48.567 test-gpudev: explicitly disabled via build config 00:02:48.567 test-mldev: explicitly disabled via build config 00:02:48.567 test-pipeline: explicitly disabled via build config 00:02:48.567 test-pmd: explicitly disabled via build config 00:02:48.567 test-regex: explicitly disabled via build config 00:02:48.567 test-sad: explicitly disabled via build config 00:02:48.567 test-security-perf: explicitly disabled via build config 00:02:48.567 00:02:48.567 libs: 00:02:48.567 argparse: explicitly disabled via build config 00:02:48.567 metrics: explicitly disabled via build config 00:02:48.567 acl: explicitly disabled via build config 00:02:48.567 bbdev: explicitly disabled via build config 00:02:48.567 bitratestats: explicitly disabled via build config 00:02:48.567 bpf: explicitly disabled via build config 00:02:48.567 cfgfile: explicitly disabled via build config 00:02:48.567 distributor: explicitly disabled via build config 00:02:48.567 efd: explicitly disabled via build config 00:02:48.567 eventdev: explicitly disabled via build config 00:02:48.567 dispatcher: explicitly disabled via build config 00:02:48.567 gpudev: explicitly disabled via build config 00:02:48.567 gro: explicitly disabled via build config 00:02:48.567 gso: explicitly disabled via build config 00:02:48.567 ip_frag: explicitly disabled via build config 00:02:48.567 jobstats: explicitly disabled via build config 00:02:48.567 latencystats: explicitly disabled via build config 00:02:48.567 lpm: explicitly disabled via build config 00:02:48.567 member: explicitly disabled via build config 00:02:48.567 pcapng: explicitly disabled via build config 00:02:48.567 rawdev: explicitly disabled via build config 00:02:48.567 regexdev: explicitly 
disabled via build config 00:02:48.567 mldev: explicitly disabled via build config 00:02:48.567 rib: explicitly disabled via build config 00:02:48.567 sched: explicitly disabled via build config 00:02:48.567 stack: explicitly disabled via build config 00:02:48.567 ipsec: explicitly disabled via build config 00:02:48.567 pdcp: explicitly disabled via build config 00:02:48.567 fib: explicitly disabled via build config 00:02:48.567 port: explicitly disabled via build config 00:02:48.567 pdump: explicitly disabled via build config 00:02:48.567 table: explicitly disabled via build config 00:02:48.567 pipeline: explicitly disabled via build config 00:02:48.567 graph: explicitly disabled via build config 00:02:48.567 node: explicitly disabled via build config 00:02:48.567 00:02:48.567 drivers: 00:02:48.567 common/cpt: not in enabled drivers build config 00:02:48.567 common/dpaax: not in enabled drivers build config 00:02:48.567 common/iavf: not in enabled drivers build config 00:02:48.567 common/idpf: not in enabled drivers build config 00:02:48.567 common/ionic: not in enabled drivers build config 00:02:48.567 common/mvep: not in enabled drivers build config 00:02:48.567 common/octeontx: not in enabled drivers build config 00:02:48.567 bus/auxiliary: not in enabled drivers build config 00:02:48.567 bus/cdx: not in enabled drivers build config 00:02:48.567 bus/dpaa: not in enabled drivers build config 00:02:48.567 bus/fslmc: not in enabled drivers build config 00:02:48.567 bus/ifpga: not in enabled drivers build config 00:02:48.567 bus/platform: not in enabled drivers build config 00:02:48.567 bus/uacce: not in enabled drivers build config 00:02:48.567 bus/vmbus: not in enabled drivers build config 00:02:48.567 common/cnxk: not in enabled drivers build config 00:02:48.567 common/mlx5: not in enabled drivers build config 00:02:48.567 common/nfp: not in enabled drivers build config 00:02:48.567 common/nitrox: not in enabled drivers build config 00:02:48.567 common/qat: not in enabled drivers build config 00:02:48.567 common/sfc_efx: not in enabled drivers build config 00:02:48.567 mempool/bucket: not in enabled drivers build config 00:02:48.567 mempool/cnxk: not in enabled drivers build config 00:02:48.567 mempool/dpaa: not in enabled drivers build config 00:02:48.567 mempool/dpaa2: not in enabled drivers build config 00:02:48.567 mempool/octeontx: not in enabled drivers build config 00:02:48.567 mempool/stack: not in enabled drivers build config 00:02:48.568 dma/cnxk: not in enabled drivers build config 00:02:48.568 dma/dpaa: not in enabled drivers build config 00:02:48.568 dma/dpaa2: not in enabled drivers build config 00:02:48.568 dma/hisilicon: not in enabled drivers build config 00:02:48.568 dma/idxd: not in enabled drivers build config 00:02:48.568 dma/ioat: not in enabled drivers build config 00:02:48.568 dma/skeleton: not in enabled drivers build config 00:02:48.568 net/af_packet: not in enabled drivers build config 00:02:48.568 net/af_xdp: not in enabled drivers build config 00:02:48.568 net/ark: not in enabled drivers build config 00:02:48.568 net/atlantic: not in enabled drivers build config 00:02:48.568 net/avp: not in enabled drivers build config 00:02:48.568 net/axgbe: not in enabled drivers build config 00:02:48.568 net/bnx2x: not in enabled drivers build config 00:02:48.568 net/bnxt: not in enabled drivers build config 00:02:48.568 net/bonding: not in enabled drivers build config 00:02:48.568 net/cnxk: not in enabled drivers build config 00:02:48.568 net/cpfl: not in enabled drivers 
build config 00:02:48.568 net/cxgbe: not in enabled drivers build config 00:02:48.568 net/dpaa: not in enabled drivers build config 00:02:48.568 net/dpaa2: not in enabled drivers build config 00:02:48.568 net/e1000: not in enabled drivers build config 00:02:48.568 net/ena: not in enabled drivers build config 00:02:48.568 net/enetc: not in enabled drivers build config 00:02:48.568 net/enetfec: not in enabled drivers build config 00:02:48.568 net/enic: not in enabled drivers build config 00:02:48.568 net/failsafe: not in enabled drivers build config 00:02:48.568 net/fm10k: not in enabled drivers build config 00:02:48.568 net/gve: not in enabled drivers build config 00:02:48.568 net/hinic: not in enabled drivers build config 00:02:48.568 net/hns3: not in enabled drivers build config 00:02:48.568 net/i40e: not in enabled drivers build config 00:02:48.568 net/iavf: not in enabled drivers build config 00:02:48.568 net/ice: not in enabled drivers build config 00:02:48.568 net/idpf: not in enabled drivers build config 00:02:48.568 net/igc: not in enabled drivers build config 00:02:48.568 net/ionic: not in enabled drivers build config 00:02:48.568 net/ipn3ke: not in enabled drivers build config 00:02:48.568 net/ixgbe: not in enabled drivers build config 00:02:48.568 net/mana: not in enabled drivers build config 00:02:48.568 net/memif: not in enabled drivers build config 00:02:48.568 net/mlx4: not in enabled drivers build config 00:02:48.568 net/mlx5: not in enabled drivers build config 00:02:48.568 net/mvneta: not in enabled drivers build config 00:02:48.568 net/mvpp2: not in enabled drivers build config 00:02:48.568 net/netvsc: not in enabled drivers build config 00:02:48.568 net/nfb: not in enabled drivers build config 00:02:48.568 net/nfp: not in enabled drivers build config 00:02:48.568 net/ngbe: not in enabled drivers build config 00:02:48.568 net/null: not in enabled drivers build config 00:02:48.568 net/octeontx: not in enabled drivers build config 00:02:48.568 net/octeon_ep: not in enabled drivers build config 00:02:48.568 net/pcap: not in enabled drivers build config 00:02:48.568 net/pfe: not in enabled drivers build config 00:02:48.568 net/qede: not in enabled drivers build config 00:02:48.568 net/ring: not in enabled drivers build config 00:02:48.568 net/sfc: not in enabled drivers build config 00:02:48.568 net/softnic: not in enabled drivers build config 00:02:48.568 net/tap: not in enabled drivers build config 00:02:48.568 net/thunderx: not in enabled drivers build config 00:02:48.568 net/txgbe: not in enabled drivers build config 00:02:48.568 net/vdev_netvsc: not in enabled drivers build config 00:02:48.568 net/vhost: not in enabled drivers build config 00:02:48.568 net/virtio: not in enabled drivers build config 00:02:48.568 net/vmxnet3: not in enabled drivers build config 00:02:48.568 raw/*: missing internal dependency, "rawdev" 00:02:48.568 crypto/armv8: not in enabled drivers build config 00:02:48.568 crypto/bcmfs: not in enabled drivers build config 00:02:48.568 crypto/caam_jr: not in enabled drivers build config 00:02:48.568 crypto/ccp: not in enabled drivers build config 00:02:48.568 crypto/cnxk: not in enabled drivers build config 00:02:48.568 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.568 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.568 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.568 crypto/mlx5: not in enabled drivers build config 00:02:48.568 crypto/mvsam: not in enabled drivers build config 00:02:48.568 crypto/nitrox: 
not in enabled drivers build config 00:02:48.568 crypto/null: not in enabled drivers build config 00:02:48.568 crypto/octeontx: not in enabled drivers build config 00:02:48.568 crypto/openssl: not in enabled drivers build config 00:02:48.568 crypto/scheduler: not in enabled drivers build config 00:02:48.568 crypto/uadk: not in enabled drivers build config 00:02:48.568 crypto/virtio: not in enabled drivers build config 00:02:48.568 compress/isal: not in enabled drivers build config 00:02:48.568 compress/mlx5: not in enabled drivers build config 00:02:48.568 compress/nitrox: not in enabled drivers build config 00:02:48.568 compress/octeontx: not in enabled drivers build config 00:02:48.568 compress/zlib: not in enabled drivers build config 00:02:48.568 regex/*: missing internal dependency, "regexdev" 00:02:48.568 ml/*: missing internal dependency, "mldev" 00:02:48.568 vdpa/ifc: not in enabled drivers build config 00:02:48.568 vdpa/mlx5: not in enabled drivers build config 00:02:48.568 vdpa/nfp: not in enabled drivers build config 00:02:48.568 vdpa/sfc: not in enabled drivers build config 00:02:48.568 event/*: missing internal dependency, "eventdev" 00:02:48.568 baseband/*: missing internal dependency, "bbdev" 00:02:48.568 gpu/*: missing internal dependency, "gpudev" 00:02:48.568 00:02:48.568 00:02:48.568 Build targets in project: 84 00:02:48.568 00:02:48.568 DPDK 24.03.0 00:02:48.568 00:02:48.568 User defined options 00:02:48.568 buildtype : debug 00:02:48.568 default_library : shared 00:02:48.568 libdir : lib 00:02:48.568 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:48.568 b_sanitize : address 00:02:48.568 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:48.568 c_link_args : 00:02:48.568 cpu_instruction_set: native 00:02:48.568 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:48.568 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:48.568 enable_docs : false 00:02:48.568 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:48.568 enable_kmods : false 00:02:48.568 max_lcores : 128 00:02:48.568 tests : false 00:02:48.568 00:02:48.568 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.568 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:48.568 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.568 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.568 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.569 [4/267] Linking static target lib/librte_kvargs.a 00:02:48.569 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.569 [6/267] Linking static target lib/librte_log.a 00:02:48.828 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.828 [8/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.828 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.828 
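For reference, the DPDK 24.03.0 "User defined options" summarized above correspond roughly to a meson setup invocation along the following lines. This is an illustrative sketch only: in this run SPDK's own configure/build scripts drive meson, and the long disable_apps / disable_libs / enable_drivers lists are abbreviated here with "...". Only the option names and values shown in the summary above are assumed.

  meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror ...' \
      -Dcpu_instruction_set=native -Dmax_lcores=128 \
      -Ddisable_apps='dumpcap,graph,pdump,...' \
      -Ddisable_libs='acl,argparse,bbdev,...' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
      -Denable_docs=false -Denable_kmods=false -Dtests=false
  # the compile step that produces the [N/267] lines in this log:
  ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10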
[10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:48.828 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:48.828 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.828 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:48.828 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:48.828 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:48.828 [16/267] Linking static target lib/librte_telemetry.a 00:02:48.828 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.828 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:49.087 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.346 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.346 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.346 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.346 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.346 [24/267] Linking target lib/librte_log.so.24.1 00:02:49.346 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.346 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.346 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:49.346 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.346 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:49.346 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.346 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.604 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:49.604 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.604 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.604 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:49.604 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.604 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.604 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:49.604 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:49.604 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:49.604 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:49.863 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:49.863 [43/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.863 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:49.863 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.863 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.863 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.863 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 
00:02:50.122 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:50.122 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:50.122 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:50.122 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:50.122 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:50.122 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:50.122 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:50.381 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:50.381 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:50.381 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:50.381 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:50.381 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:50.381 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:50.381 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:50.381 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:50.381 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:50.381 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:50.639 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:50.639 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:50.639 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:50.639 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:50.898 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:50.898 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:50.898 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:50.898 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:50.898 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:50.898 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:50.898 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:50.898 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:51.156 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:51.156 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:51.156 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:51.156 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:51.156 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:51.416 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:51.416 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:51.416 [85/267] Linking static target lib/librte_ring.a 00:02:51.416 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:51.416 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:51.416 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:51.416 [89/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:51.675 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:51.675 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:51.675 [92/267] Linking static target lib/librte_eal.a 00:02:51.675 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:51.675 [94/267] Linking static target lib/librte_mempool.a 00:02:51.675 [95/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.675 [96/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:51.675 [97/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:51.675 [98/267] Linking static target lib/librte_rcu.a 00:02:51.934 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:51.934 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:51.934 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:51.934 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:52.192 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:52.192 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:52.192 [105/267] Linking static target lib/librte_meter.a 00:02:52.193 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.193 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:52.193 [108/267] Linking static target lib/librte_net.a 00:02:52.193 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:52.193 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:52.451 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:52.451 [112/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:52.451 [113/267] Linking static target lib/librte_mbuf.a 00:02:52.451 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.451 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.451 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:52.710 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.710 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:52.710 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:52.710 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:52.969 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:52.969 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:53.228 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:53.228 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:53.228 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:53.228 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:53.228 [127/267] Linking static target lib/librte_pci.a 00:02:53.228 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:53.228 [129/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.228 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:53.228 
[131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:53.228 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:53.228 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:53.487 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:53.487 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:53.487 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:53.487 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:53.487 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:53.487 [139/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.487 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:53.487 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:53.487 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:53.487 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:53.487 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:53.487 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:53.487 [146/267] Linking static target lib/librte_cmdline.a 00:02:53.746 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:53.746 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:53.746 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.746 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.005 [151/267] Linking static target lib/librte_timer.a 00:02:54.005 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.005 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.005 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.005 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.005 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:54.005 [157/267] Linking static target lib/librte_ethdev.a 00:02:54.263 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:54.263 [159/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.263 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:54.263 [161/267] Linking static target lib/librte_hash.a 00:02:54.263 [162/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:54.522 [163/267] Linking static target lib/librte_compressdev.a 00:02:54.522 [164/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.522 [165/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.522 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.522 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.522 [168/267] Linking static target lib/librte_dmadev.a 00:02:54.781 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.781 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 
00:02:54.781 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.781 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.781 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:55.041 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:55.041 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:55.041 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:55.041 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.041 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.041 [179/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.299 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.299 [181/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.557 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.557 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.557 [184/267] Linking static target lib/librte_cryptodev.a 00:02:55.557 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.557 [186/267] Linking static target lib/librte_power.a 00:02:55.557 [187/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.557 [188/267] Linking static target lib/librte_reorder.a 00:02:55.557 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.557 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.557 [191/267] Linking static target lib/librte_security.a 00:02:55.557 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.815 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.074 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.074 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.074 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.331 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.331 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.331 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.589 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:56.589 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.589 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:56.589 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:56.589 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:56.847 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.847 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.847 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.847 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.847 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.108 
[210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.108 [211/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.109 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.109 [213/267] Linking static target drivers/librte_bus_vdev.a 00:02:57.109 [214/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.109 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.109 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.109 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.109 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.109 [219/267] Linking static target drivers/librte_bus_pci.a 00:02:57.109 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.389 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.389 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.389 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.389 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:57.389 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.389 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.647 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:59.022 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.022 [229/267] Linking target lib/librte_eal.so.24.1 00:02:59.022 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:59.022 [231/267] Linking target lib/librte_ring.so.24.1 00:02:59.022 [232/267] Linking target lib/librte_timer.so.24.1 00:02:59.022 [233/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:59.022 [234/267] Linking target lib/librte_pci.so.24.1 00:02:59.022 [235/267] Linking target lib/librte_meter.so.24.1 00:02:59.022 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:59.022 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:59.022 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:59.022 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:59.022 [240/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:59.022 [241/267] Linking target lib/librte_mempool.so.24.1 00:02:59.022 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:59.022 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:59.022 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:59.022 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:59.022 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:59.281 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:59.281 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:59.281 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:59.281 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:59.281 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:59.281 [252/267] Linking target lib/librte_net.so.24.1 00:02:59.281 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:59.539 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:59.539 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:59.539 [256/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.539 [257/267] Linking target lib/librte_hash.so.24.1 00:02:59.539 [258/267] Linking target lib/librte_security.so.24.1 00:02:59.539 [259/267] Linking target lib/librte_cmdline.so.24.1 00:02:59.539 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:59.539 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:59.539 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.797 [263/267] Linking target lib/librte_power.so.24.1 00:03:00.364 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:00.364 [265/267] Linking static target lib/librte_vhost.a 00:03:01.738 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.738 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:01.738 INFO: autodetecting backend as ninja 00:03:01.738 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:13.942 CC lib/log/log_flags.o 00:03:13.942 CC lib/log/log.o 00:03:13.942 CC lib/log/log_deprecated.o 00:03:13.942 CC lib/ut/ut.o 00:03:13.942 CC lib/ut_mock/mock.o 00:03:13.942 LIB libspdk_log.a 00:03:13.942 LIB libspdk_ut_mock.a 00:03:13.942 LIB libspdk_ut.a 00:03:13.942 SO libspdk_log.so.7.1 00:03:13.942 SO libspdk_ut_mock.so.6.0 00:03:13.942 SO libspdk_ut.so.2.0 00:03:13.942 SYMLINK libspdk_log.so 00:03:13.942 SYMLINK libspdk_ut_mock.so 00:03:13.942 SYMLINK libspdk_ut.so 00:03:13.942 CXX lib/trace_parser/trace.o 00:03:13.942 CC lib/dma/dma.o 00:03:13.942 CC lib/util/base64.o 00:03:13.942 CC lib/util/cpuset.o 00:03:13.942 CC lib/util/crc32.o 00:03:13.942 CC lib/util/bit_array.o 00:03:13.942 CC lib/util/crc16.o 00:03:13.942 CC lib/util/crc32c.o 00:03:13.942 CC lib/ioat/ioat.o 00:03:13.942 CC lib/vfio_user/host/vfio_user_pci.o 00:03:13.942 CC lib/util/crc32_ieee.o 00:03:13.942 CC lib/util/crc64.o 00:03:13.942 CC lib/util/dif.o 00:03:13.942 CC lib/util/fd.o 00:03:14.200 CC lib/util/fd_group.o 00:03:14.200 LIB libspdk_dma.a 00:03:14.200 SO libspdk_dma.so.5.0 00:03:14.200 CC lib/util/file.o 00:03:14.200 CC lib/util/hexlify.o 00:03:14.200 CC lib/util/iov.o 00:03:14.200 SYMLINK libspdk_dma.so 00:03:14.200 CC lib/util/math.o 00:03:14.200 CC lib/util/net.o 00:03:14.200 LIB libspdk_ioat.a 00:03:14.200 SO libspdk_ioat.so.7.0 00:03:14.200 CC lib/vfio_user/host/vfio_user.o 00:03:14.200 CC lib/util/pipe.o 00:03:14.200 CC lib/util/strerror_tls.o 00:03:14.200 SYMLINK libspdk_ioat.so 00:03:14.200 CC lib/util/string.o 00:03:14.200 CC lib/util/uuid.o 00:03:14.200 CC lib/util/xor.o 00:03:14.200 CC lib/util/zipf.o 00:03:14.200 CC lib/util/md5.o 00:03:14.458 LIB libspdk_vfio_user.a 00:03:14.458 SO libspdk_vfio_user.so.5.0 00:03:14.458 SYMLINK libspdk_vfio_user.so 00:03:14.716 LIB libspdk_util.a 00:03:14.716 SO libspdk_util.so.10.1 00:03:14.716 LIB libspdk_trace_parser.a 
00:03:14.716 SO libspdk_trace_parser.so.6.0 00:03:14.974 SYMLINK libspdk_util.so 00:03:14.974 SYMLINK libspdk_trace_parser.so 00:03:14.974 CC lib/json/json_parse.o 00:03:14.974 CC lib/json/json_write.o 00:03:14.974 CC lib/json/json_util.o 00:03:14.974 CC lib/idxd/idxd.o 00:03:14.974 CC lib/idxd/idxd_kernel.o 00:03:14.974 CC lib/idxd/idxd_user.o 00:03:14.974 CC lib/conf/conf.o 00:03:14.974 CC lib/rdma_utils/rdma_utils.o 00:03:14.974 CC lib/env_dpdk/env.o 00:03:14.974 CC lib/vmd/vmd.o 00:03:15.232 CC lib/env_dpdk/memory.o 00:03:15.232 LIB libspdk_conf.a 00:03:15.232 SO libspdk_conf.so.6.0 00:03:15.232 LIB libspdk_rdma_utils.a 00:03:15.232 CC lib/env_dpdk/pci.o 00:03:15.232 SO libspdk_rdma_utils.so.1.0 00:03:15.232 SYMLINK libspdk_conf.so 00:03:15.232 CC lib/vmd/led.o 00:03:15.232 CC lib/env_dpdk/init.o 00:03:15.232 CC lib/env_dpdk/threads.o 00:03:15.232 LIB libspdk_json.a 00:03:15.232 SYMLINK libspdk_rdma_utils.so 00:03:15.232 CC lib/env_dpdk/pci_ioat.o 00:03:15.232 SO libspdk_json.so.6.0 00:03:15.232 SYMLINK libspdk_json.so 00:03:15.232 CC lib/env_dpdk/pci_virtio.o 00:03:15.232 CC lib/env_dpdk/pci_vmd.o 00:03:15.232 CC lib/env_dpdk/pci_idxd.o 00:03:15.489 CC lib/env_dpdk/pci_event.o 00:03:15.489 CC lib/rdma_provider/common.o 00:03:15.489 CC lib/env_dpdk/sigbus_handler.o 00:03:15.489 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:15.489 CC lib/env_dpdk/pci_dpdk.o 00:03:15.489 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:15.489 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:15.489 LIB libspdk_idxd.a 00:03:15.489 CC lib/jsonrpc/jsonrpc_server.o 00:03:15.489 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:15.747 LIB libspdk_rdma_provider.a 00:03:15.747 CC lib/jsonrpc/jsonrpc_client.o 00:03:15.747 SO libspdk_idxd.so.12.1 00:03:15.747 SO libspdk_rdma_provider.so.7.0 00:03:15.747 LIB libspdk_vmd.a 00:03:15.747 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:15.747 SO libspdk_vmd.so.6.0 00:03:15.747 SYMLINK libspdk_idxd.so 00:03:15.747 SYMLINK libspdk_rdma_provider.so 00:03:15.747 SYMLINK libspdk_vmd.so 00:03:16.005 LIB libspdk_jsonrpc.a 00:03:16.005 SO libspdk_jsonrpc.so.6.0 00:03:16.005 SYMLINK libspdk_jsonrpc.so 00:03:16.262 CC lib/rpc/rpc.o 00:03:16.520 LIB libspdk_env_dpdk.a 00:03:16.520 LIB libspdk_rpc.a 00:03:16.520 SO libspdk_rpc.so.6.0 00:03:16.520 SO libspdk_env_dpdk.so.15.1 00:03:16.520 SYMLINK libspdk_rpc.so 00:03:16.520 SYMLINK libspdk_env_dpdk.so 00:03:16.779 CC lib/trace/trace_rpc.o 00:03:16.779 CC lib/trace/trace.o 00:03:16.779 CC lib/trace/trace_flags.o 00:03:16.779 CC lib/notify/notify.o 00:03:16.779 CC lib/keyring/keyring_rpc.o 00:03:16.779 CC lib/notify/notify_rpc.o 00:03:16.779 CC lib/keyring/keyring.o 00:03:16.779 LIB libspdk_notify.a 00:03:16.779 SO libspdk_notify.so.6.0 00:03:16.779 LIB libspdk_keyring.a 00:03:16.779 SYMLINK libspdk_notify.so 00:03:17.037 SO libspdk_keyring.so.2.0 00:03:17.037 LIB libspdk_trace.a 00:03:17.037 SO libspdk_trace.so.11.0 00:03:17.037 SYMLINK libspdk_keyring.so 00:03:17.037 SYMLINK libspdk_trace.so 00:03:17.295 CC lib/thread/thread.o 00:03:17.295 CC lib/thread/iobuf.o 00:03:17.295 CC lib/sock/sock.o 00:03:17.295 CC lib/sock/sock_rpc.o 00:03:17.554 LIB libspdk_sock.a 00:03:17.554 SO libspdk_sock.so.10.0 00:03:17.812 SYMLINK libspdk_sock.so 00:03:18.070 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.070 CC lib/nvme/nvme_ctrlr.o 00:03:18.070 CC lib/nvme/nvme_qpair.o 00:03:18.070 CC lib/nvme/nvme_fabric.o 00:03:18.070 CC lib/nvme/nvme_pcie.o 00:03:18.070 CC lib/nvme/nvme_pcie_common.o 00:03:18.070 CC lib/nvme/nvme_ns_cmd.o 00:03:18.070 CC lib/nvme/nvme_ns.o 00:03:18.070 CC 
lib/nvme/nvme.o 00:03:18.636 CC lib/nvme/nvme_quirks.o 00:03:18.636 CC lib/nvme/nvme_transport.o 00:03:18.636 CC lib/nvme/nvme_discovery.o 00:03:18.636 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:18.636 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:18.636 CC lib/nvme/nvme_tcp.o 00:03:18.636 CC lib/nvme/nvme_opal.o 00:03:18.636 LIB libspdk_thread.a 00:03:18.636 SO libspdk_thread.so.11.0 00:03:18.894 SYMLINK libspdk_thread.so 00:03:18.894 CC lib/nvme/nvme_io_msg.o 00:03:18.894 CC lib/nvme/nvme_poll_group.o 00:03:18.894 CC lib/nvme/nvme_zns.o 00:03:18.894 CC lib/nvme/nvme_stubs.o 00:03:18.894 CC lib/nvme/nvme_auth.o 00:03:19.153 CC lib/nvme/nvme_cuse.o 00:03:19.153 CC lib/nvme/nvme_rdma.o 00:03:19.413 CC lib/accel/accel.o 00:03:19.413 CC lib/blob/blobstore.o 00:03:19.413 CC lib/accel/accel_rpc.o 00:03:19.413 CC lib/init/json_config.o 00:03:19.413 CC lib/virtio/virtio.o 00:03:19.673 CC lib/virtio/virtio_vhost_user.o 00:03:19.673 CC lib/init/subsystem.o 00:03:19.673 CC lib/virtio/virtio_vfio_user.o 00:03:19.673 CC lib/init/subsystem_rpc.o 00:03:19.673 CC lib/init/rpc.o 00:03:19.933 CC lib/accel/accel_sw.o 00:03:19.933 CC lib/virtio/virtio_pci.o 00:03:19.933 CC lib/blob/request.o 00:03:19.933 LIB libspdk_init.a 00:03:19.933 CC lib/fsdev/fsdev.o 00:03:19.933 SO libspdk_init.so.6.0 00:03:19.933 SYMLINK libspdk_init.so 00:03:19.933 CC lib/fsdev/fsdev_io.o 00:03:20.192 LIB libspdk_virtio.a 00:03:20.192 CC lib/blob/zeroes.o 00:03:20.192 SO libspdk_virtio.so.7.0 00:03:20.192 CC lib/blob/blob_bs_dev.o 00:03:20.192 CC lib/fsdev/fsdev_rpc.o 00:03:20.192 SYMLINK libspdk_virtio.so 00:03:20.192 CC lib/event/app.o 00:03:20.192 CC lib/event/reactor.o 00:03:20.192 LIB libspdk_nvme.a 00:03:20.192 CC lib/event/log_rpc.o 00:03:20.192 CC lib/event/app_rpc.o 00:03:20.192 LIB libspdk_accel.a 00:03:20.450 SO libspdk_nvme.so.15.0 00:03:20.450 CC lib/event/scheduler_static.o 00:03:20.450 SO libspdk_accel.so.16.0 00:03:20.450 SYMLINK libspdk_accel.so 00:03:20.450 SYMLINK libspdk_nvme.so 00:03:20.450 LIB libspdk_fsdev.a 00:03:20.708 SO libspdk_fsdev.so.2.0 00:03:20.708 LIB libspdk_event.a 00:03:20.708 CC lib/bdev/bdev.o 00:03:20.708 CC lib/bdev/bdev_rpc.o 00:03:20.708 CC lib/bdev/scsi_nvme.o 00:03:20.708 CC lib/bdev/part.o 00:03:20.708 CC lib/bdev/bdev_zone.o 00:03:20.708 SYMLINK libspdk_fsdev.so 00:03:20.709 SO libspdk_event.so.14.0 00:03:20.709 SYMLINK libspdk_event.so 00:03:20.709 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.643 LIB libspdk_fuse_dispatcher.a 00:03:21.643 SO libspdk_fuse_dispatcher.so.1.0 00:03:21.643 SYMLINK libspdk_fuse_dispatcher.so 00:03:22.223 LIB libspdk_blob.a 00:03:22.223 SO libspdk_blob.so.11.0 00:03:22.223 SYMLINK libspdk_blob.so 00:03:22.512 CC lib/blobfs/blobfs.o 00:03:22.512 CC lib/blobfs/tree.o 00:03:22.512 CC lib/lvol/lvol.o 00:03:23.078 LIB libspdk_blobfs.a 00:03:23.078 SO libspdk_blobfs.so.10.0 00:03:23.336 SYMLINK libspdk_blobfs.so 00:03:23.336 LIB libspdk_bdev.a 00:03:23.336 SO libspdk_bdev.so.17.0 00:03:23.336 LIB libspdk_lvol.a 00:03:23.336 SYMLINK libspdk_bdev.so 00:03:23.336 SO libspdk_lvol.so.10.0 00:03:23.593 SYMLINK libspdk_lvol.so 00:03:23.593 CC lib/ftl/ftl_core.o 00:03:23.593 CC lib/ftl/ftl_init.o 00:03:23.593 CC lib/ftl/ftl_layout.o 00:03:23.593 CC lib/ftl/ftl_debug.o 00:03:23.593 CC lib/ftl/ftl_io.o 00:03:23.593 CC lib/ftl/ftl_sb.o 00:03:23.593 CC lib/nvmf/ctrlr.o 00:03:23.593 CC lib/ublk/ublk.o 00:03:23.593 CC lib/scsi/dev.o 00:03:23.593 CC lib/nbd/nbd.o 00:03:23.593 CC lib/nbd/nbd_rpc.o 00:03:23.593 CC lib/ublk/ublk_rpc.o 00:03:23.851 CC lib/ftl/ftl_l2p.o 00:03:23.851 CC 
lib/scsi/lun.o 00:03:23.851 CC lib/ftl/ftl_l2p_flat.o 00:03:23.851 CC lib/ftl/ftl_nv_cache.o 00:03:23.851 CC lib/nvmf/ctrlr_discovery.o 00:03:23.851 CC lib/nvmf/ctrlr_bdev.o 00:03:23.851 LIB libspdk_nbd.a 00:03:23.851 CC lib/ftl/ftl_band.o 00:03:23.851 SO libspdk_nbd.so.7.0 00:03:23.851 CC lib/ftl/ftl_band_ops.o 00:03:23.851 CC lib/scsi/port.o 00:03:23.851 SYMLINK libspdk_nbd.so 00:03:23.851 CC lib/scsi/scsi.o 00:03:24.108 CC lib/scsi/scsi_bdev.o 00:03:24.108 CC lib/scsi/scsi_pr.o 00:03:24.108 CC lib/scsi/scsi_rpc.o 00:03:24.108 CC lib/scsi/task.o 00:03:24.108 CC lib/ftl/ftl_writer.o 00:03:24.108 CC lib/ftl/ftl_rq.o 00:03:24.109 LIB libspdk_ublk.a 00:03:24.109 SO libspdk_ublk.so.3.0 00:03:24.367 CC lib/nvmf/subsystem.o 00:03:24.367 SYMLINK libspdk_ublk.so 00:03:24.367 CC lib/nvmf/nvmf.o 00:03:24.367 CC lib/nvmf/nvmf_rpc.o 00:03:24.367 CC lib/ftl/ftl_reloc.o 00:03:24.367 CC lib/ftl/ftl_l2p_cache.o 00:03:24.367 CC lib/ftl/ftl_p2l.o 00:03:24.367 CC lib/ftl/ftl_p2l_log.o 00:03:24.367 LIB libspdk_scsi.a 00:03:24.367 SO libspdk_scsi.so.9.0 00:03:24.624 SYMLINK libspdk_scsi.so 00:03:24.624 CC lib/ftl/mngt/ftl_mngt.o 00:03:24.624 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:24.624 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:24.624 CC lib/nvmf/transport.o 00:03:24.624 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:24.882 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:24.882 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:24.882 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.882 CC lib/iscsi/conn.o 00:03:24.882 CC lib/iscsi/init_grp.o 00:03:24.882 CC lib/vhost/vhost.o 00:03:25.140 CC lib/nvmf/tcp.o 00:03:25.140 CC lib/iscsi/iscsi.o 00:03:25.140 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.140 CC lib/iscsi/param.o 00:03:25.140 CC lib/iscsi/portal_grp.o 00:03:25.140 CC lib/nvmf/stubs.o 00:03:25.140 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:25.398 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:25.398 CC lib/vhost/vhost_rpc.o 00:03:25.398 CC lib/nvmf/mdns_server.o 00:03:25.398 CC lib/iscsi/tgt_node.o 00:03:25.398 CC lib/vhost/vhost_scsi.o 00:03:25.398 CC lib/vhost/vhost_blk.o 00:03:25.656 CC lib/vhost/rte_vhost_user.o 00:03:25.656 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:25.656 CC lib/nvmf/rdma.o 00:03:25.656 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:25.656 CC lib/iscsi/iscsi_subsystem.o 00:03:25.914 CC lib/iscsi/iscsi_rpc.o 00:03:25.914 CC lib/iscsi/task.o 00:03:26.172 CC lib/nvmf/auth.o 00:03:26.172 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.172 CC lib/ftl/utils/ftl_conf.o 00:03:26.172 CC lib/ftl/utils/ftl_md.o 00:03:26.172 CC lib/ftl/utils/ftl_mempool.o 00:03:26.172 LIB libspdk_iscsi.a 00:03:26.172 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.172 CC lib/ftl/utils/ftl_property.o 00:03:26.172 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.172 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.172 SO libspdk_iscsi.so.8.0 00:03:26.430 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.430 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.430 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.430 SYMLINK libspdk_iscsi.so 00:03:26.430 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.430 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:26.430 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.430 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.430 LIB libspdk_vhost.a 00:03:26.430 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.430 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.430 SO libspdk_vhost.so.8.0 00:03:26.430 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:26.688 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:26.688 CC lib/ftl/base/ftl_base_dev.o 00:03:26.688 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.688 SYMLINK 
libspdk_vhost.so 00:03:26.688 CC lib/ftl/ftl_trace.o 00:03:26.947 LIB libspdk_ftl.a 00:03:26.947 SO libspdk_ftl.so.9.0 00:03:27.205 SYMLINK libspdk_ftl.so 00:03:27.464 LIB libspdk_nvmf.a 00:03:27.464 SO libspdk_nvmf.so.20.0 00:03:27.722 SYMLINK libspdk_nvmf.so 00:03:27.981 CC module/env_dpdk/env_dpdk_rpc.o 00:03:27.981 CC module/fsdev/aio/fsdev_aio.o 00:03:27.981 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:27.981 CC module/accel/error/accel_error.o 00:03:27.981 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:27.981 CC module/keyring/file/keyring.o 00:03:27.981 CC module/scheduler/gscheduler/gscheduler.o 00:03:27.981 CC module/accel/ioat/accel_ioat.o 00:03:27.981 CC module/blob/bdev/blob_bdev.o 00:03:27.981 CC module/sock/posix/posix.o 00:03:27.981 LIB libspdk_env_dpdk_rpc.a 00:03:27.981 SO libspdk_env_dpdk_rpc.so.6.0 00:03:27.981 LIB libspdk_scheduler_gscheduler.a 00:03:27.981 LIB libspdk_scheduler_dpdk_governor.a 00:03:27.981 SYMLINK libspdk_env_dpdk_rpc.so 00:03:27.981 SO libspdk_scheduler_gscheduler.so.4.0 00:03:27.981 CC module/accel/error/accel_error_rpc.o 00:03:27.981 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:27.981 LIB libspdk_scheduler_dynamic.a 00:03:28.239 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.239 CC module/keyring/file/keyring_rpc.o 00:03:28.240 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.240 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:28.240 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.240 CC module/fsdev/aio/linux_aio_mgr.o 00:03:28.240 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.240 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.240 LIB libspdk_blob_bdev.a 00:03:28.240 SO libspdk_blob_bdev.so.11.0 00:03:28.240 LIB libspdk_accel_error.a 00:03:28.240 LIB libspdk_keyring_file.a 00:03:28.240 LIB libspdk_accel_ioat.a 00:03:28.240 SO libspdk_accel_error.so.2.0 00:03:28.240 SO libspdk_keyring_file.so.2.0 00:03:28.240 SO libspdk_accel_ioat.so.6.0 00:03:28.240 SYMLINK libspdk_blob_bdev.so 00:03:28.240 CC module/keyring/linux/keyring.o 00:03:28.240 CC module/accel/dsa/accel_dsa.o 00:03:28.240 SYMLINK libspdk_accel_ioat.so 00:03:28.240 SYMLINK libspdk_accel_error.so 00:03:28.240 SYMLINK libspdk_keyring_file.so 00:03:28.240 CC module/keyring/linux/keyring_rpc.o 00:03:28.240 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.499 LIB libspdk_keyring_linux.a 00:03:28.499 SO libspdk_keyring_linux.so.1.0 00:03:28.499 CC module/accel/iaa/accel_iaa.o 00:03:28.499 CC module/bdev/delay/vbdev_delay.o 00:03:28.499 LIB libspdk_fsdev_aio.a 00:03:28.499 CC module/bdev/error/vbdev_error.o 00:03:28.499 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.499 SYMLINK libspdk_keyring_linux.so 00:03:28.499 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.499 CC module/bdev/gpt/gpt.o 00:03:28.499 LIB libspdk_accel_dsa.a 00:03:28.499 SO libspdk_fsdev_aio.so.1.0 00:03:28.499 SO libspdk_accel_dsa.so.5.0 00:03:28.499 CC module/bdev/lvol/vbdev_lvol.o 00:03:28.499 SYMLINK libspdk_fsdev_aio.so 00:03:28.499 SYMLINK libspdk_accel_dsa.so 00:03:28.499 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:28.499 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.499 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.499 LIB libspdk_sock_posix.a 00:03:28.499 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:28.758 SO libspdk_sock_posix.so.6.0 00:03:28.758 LIB libspdk_blobfs_bdev.a 00:03:28.758 SO libspdk_blobfs_bdev.so.6.0 00:03:28.758 LIB libspdk_accel_iaa.a 00:03:28.758 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.758 SO libspdk_accel_iaa.so.3.0 00:03:28.758 SYMLINK libspdk_sock_posix.so 00:03:28.758 SYMLINK 
libspdk_blobfs_bdev.so 00:03:28.758 SYMLINK libspdk_accel_iaa.so 00:03:28.758 LIB libspdk_bdev_error.a 00:03:28.758 SO libspdk_bdev_error.so.6.0 00:03:28.758 LIB libspdk_bdev_delay.a 00:03:28.758 SO libspdk_bdev_delay.so.6.0 00:03:28.758 SYMLINK libspdk_bdev_error.so 00:03:28.758 CC module/bdev/malloc/bdev_malloc.o 00:03:28.758 CC module/bdev/nvme/bdev_nvme.o 00:03:28.758 CC module/bdev/null/bdev_null.o 00:03:28.758 CC module/bdev/passthru/vbdev_passthru.o 00:03:28.758 SYMLINK libspdk_bdev_delay.so 00:03:28.758 CC module/bdev/null/bdev_null_rpc.o 00:03:29.018 CC module/bdev/raid/bdev_raid.o 00:03:29.018 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.018 LIB libspdk_bdev_lvol.a 00:03:29.018 CC module/bdev/split/vbdev_split.o 00:03:29.018 LIB libspdk_bdev_gpt.a 00:03:29.018 SO libspdk_bdev_lvol.so.6.0 00:03:29.018 SO libspdk_bdev_gpt.so.6.0 00:03:29.018 SYMLINK libspdk_bdev_lvol.so 00:03:29.018 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.018 CC module/bdev/nvme/nvme_rpc.o 00:03:29.018 SYMLINK libspdk_bdev_gpt.so 00:03:29.018 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.018 LIB libspdk_bdev_null.a 00:03:29.018 SO libspdk_bdev_null.so.6.0 00:03:29.018 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.018 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.018 SYMLINK libspdk_bdev_null.so 00:03:29.018 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.018 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.018 CC module/bdev/raid/raid0.o 00:03:29.018 LIB libspdk_bdev_split.a 00:03:29.018 SO libspdk_bdev_split.so.6.0 00:03:29.279 LIB libspdk_bdev_passthru.a 00:03:29.279 SYMLINK libspdk_bdev_split.so 00:03:29.279 SO libspdk_bdev_passthru.so.6.0 00:03:29.279 LIB libspdk_bdev_malloc.a 00:03:29.279 CC module/bdev/nvme/vbdev_opal.o 00:03:29.279 CC module/bdev/raid/raid1.o 00:03:29.279 SO libspdk_bdev_malloc.so.6.0 00:03:29.279 SYMLINK libspdk_bdev_passthru.so 00:03:29.279 CC module/bdev/raid/concat.o 00:03:29.279 SYMLINK libspdk_bdev_malloc.so 00:03:29.279 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.279 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.279 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:29.279 CC module/bdev/xnvme/bdev_xnvme.o 00:03:29.539 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:29.539 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.539 LIB libspdk_bdev_zone_block.a 00:03:29.539 SO libspdk_bdev_zone_block.so.6.0 00:03:29.539 CC module/bdev/aio/bdev_aio.o 00:03:29.539 CC module/bdev/aio/bdev_aio_rpc.o 00:03:29.539 CC module/bdev/ftl/bdev_ftl.o 00:03:29.539 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.539 CC module/bdev/iscsi/bdev_iscsi.o 00:03:29.539 LIB libspdk_bdev_xnvme.a 00:03:29.539 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.539 SYMLINK libspdk_bdev_zone_block.so 00:03:29.539 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.539 SO libspdk_bdev_xnvme.so.3.0 00:03:29.797 LIB libspdk_bdev_raid.a 00:03:29.797 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.797 SYMLINK libspdk_bdev_xnvme.so 00:03:29.797 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.797 SO libspdk_bdev_raid.so.6.0 00:03:29.797 SYMLINK libspdk_bdev_raid.so 00:03:29.797 LIB libspdk_bdev_ftl.a 00:03:29.797 LIB libspdk_bdev_aio.a 00:03:29.797 LIB libspdk_bdev_iscsi.a 00:03:29.797 SO libspdk_bdev_ftl.so.6.0 00:03:29.797 SO libspdk_bdev_aio.so.6.0 00:03:29.797 SO libspdk_bdev_iscsi.so.6.0 00:03:30.056 SYMLINK libspdk_bdev_ftl.so 00:03:30.056 SYMLINK libspdk_bdev_aio.so 00:03:30.056 SYMLINK libspdk_bdev_iscsi.so 00:03:30.056 LIB libspdk_bdev_virtio.a 00:03:30.056 SO libspdk_bdev_virtio.so.6.0 00:03:30.056 
SYMLINK libspdk_bdev_virtio.so 00:03:30.992 LIB libspdk_bdev_nvme.a 00:03:30.992 SO libspdk_bdev_nvme.so.7.1 00:03:31.250 SYMLINK libspdk_bdev_nvme.so 00:03:31.508 CC module/event/subsystems/keyring/keyring.o 00:03:31.508 CC module/event/subsystems/vmd/vmd.o 00:03:31.508 CC module/event/subsystems/fsdev/fsdev.o 00:03:31.508 CC module/event/subsystems/scheduler/scheduler.o 00:03:31.508 CC module/event/subsystems/iobuf/iobuf.o 00:03:31.508 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:31.508 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:31.508 CC module/event/subsystems/sock/sock.o 00:03:31.508 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:31.766 LIB libspdk_event_scheduler.a 00:03:31.766 LIB libspdk_event_fsdev.a 00:03:31.766 LIB libspdk_event_keyring.a 00:03:31.766 LIB libspdk_event_iobuf.a 00:03:31.766 LIB libspdk_event_vhost_blk.a 00:03:31.766 SO libspdk_event_scheduler.so.4.0 00:03:31.766 SO libspdk_event_fsdev.so.1.0 00:03:31.766 LIB libspdk_event_vmd.a 00:03:31.766 SO libspdk_event_keyring.so.1.0 00:03:31.766 SO libspdk_event_iobuf.so.3.0 00:03:31.766 SO libspdk_event_vhost_blk.so.3.0 00:03:31.766 LIB libspdk_event_sock.a 00:03:31.766 SO libspdk_event_vmd.so.6.0 00:03:31.766 SYMLINK libspdk_event_scheduler.so 00:03:31.766 SO libspdk_event_sock.so.5.0 00:03:31.766 SYMLINK libspdk_event_keyring.so 00:03:31.766 SYMLINK libspdk_event_fsdev.so 00:03:31.766 SYMLINK libspdk_event_iobuf.so 00:03:31.766 SYMLINK libspdk_event_vhost_blk.so 00:03:31.766 SYMLINK libspdk_event_vmd.so 00:03:31.766 SYMLINK libspdk_event_sock.so 00:03:32.025 CC module/event/subsystems/accel/accel.o 00:03:32.025 LIB libspdk_event_accel.a 00:03:32.025 SO libspdk_event_accel.so.6.0 00:03:32.025 SYMLINK libspdk_event_accel.so 00:03:32.283 CC module/event/subsystems/bdev/bdev.o 00:03:32.541 LIB libspdk_event_bdev.a 00:03:32.541 SO libspdk_event_bdev.so.6.0 00:03:32.541 SYMLINK libspdk_event_bdev.so 00:03:32.800 CC module/event/subsystems/nbd/nbd.o 00:03:32.800 CC module/event/subsystems/scsi/scsi.o 00:03:32.800 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:32.800 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:32.800 CC module/event/subsystems/ublk/ublk.o 00:03:32.800 LIB libspdk_event_nbd.a 00:03:32.800 LIB libspdk_event_scsi.a 00:03:32.800 SO libspdk_event_nbd.so.6.0 00:03:32.800 LIB libspdk_event_ublk.a 00:03:32.800 SO libspdk_event_ublk.so.3.0 00:03:32.800 SO libspdk_event_scsi.so.6.0 00:03:32.800 SYMLINK libspdk_event_nbd.so 00:03:33.059 SYMLINK libspdk_event_ublk.so 00:03:33.059 SYMLINK libspdk_event_scsi.so 00:03:33.059 LIB libspdk_event_nvmf.a 00:03:33.059 SO libspdk_event_nvmf.so.6.0 00:03:33.059 SYMLINK libspdk_event_nvmf.so 00:03:33.059 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.059 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.317 LIB libspdk_event_vhost_scsi.a 00:03:33.317 LIB libspdk_event_iscsi.a 00:03:33.317 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.317 SO libspdk_event_iscsi.so.6.0 00:03:33.317 SYMLINK libspdk_event_vhost_scsi.so 00:03:33.317 SYMLINK libspdk_event_iscsi.so 00:03:33.575 SO libspdk.so.6.0 00:03:33.575 SYMLINK libspdk.so 00:03:33.575 TEST_HEADER include/spdk/accel.h 00:03:33.575 TEST_HEADER include/spdk/accel_module.h 00:03:33.575 TEST_HEADER include/spdk/assert.h 00:03:33.575 TEST_HEADER include/spdk/barrier.h 00:03:33.575 CXX app/trace/trace.o 00:03:33.575 CC test/rpc_client/rpc_client_test.o 00:03:33.575 TEST_HEADER include/spdk/base64.h 00:03:33.575 TEST_HEADER include/spdk/bdev.h 00:03:33.575 TEST_HEADER include/spdk/bdev_module.h 00:03:33.575 
TEST_HEADER include/spdk/bdev_zone.h 00:03:33.575 TEST_HEADER include/spdk/bit_array.h 00:03:33.575 TEST_HEADER include/spdk/bit_pool.h 00:03:33.575 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.575 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.575 TEST_HEADER include/spdk/blobfs.h 00:03:33.575 TEST_HEADER include/spdk/blob.h 00:03:33.575 TEST_HEADER include/spdk/conf.h 00:03:33.575 TEST_HEADER include/spdk/config.h 00:03:33.575 TEST_HEADER include/spdk/cpuset.h 00:03:33.575 TEST_HEADER include/spdk/crc16.h 00:03:33.575 TEST_HEADER include/spdk/crc32.h 00:03:33.575 TEST_HEADER include/spdk/crc64.h 00:03:33.575 TEST_HEADER include/spdk/dif.h 00:03:33.575 TEST_HEADER include/spdk/dma.h 00:03:33.575 TEST_HEADER include/spdk/endian.h 00:03:33.575 TEST_HEADER include/spdk/env_dpdk.h 00:03:33.575 TEST_HEADER include/spdk/env.h 00:03:33.575 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:33.575 TEST_HEADER include/spdk/event.h 00:03:33.575 TEST_HEADER include/spdk/fd_group.h 00:03:33.575 TEST_HEADER include/spdk/fd.h 00:03:33.575 TEST_HEADER include/spdk/file.h 00:03:33.575 TEST_HEADER include/spdk/fsdev.h 00:03:33.575 TEST_HEADER include/spdk/fsdev_module.h 00:03:33.575 TEST_HEADER include/spdk/ftl.h 00:03:33.575 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:33.575 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.575 TEST_HEADER include/spdk/hexlify.h 00:03:33.575 TEST_HEADER include/spdk/histogram_data.h 00:03:33.575 TEST_HEADER include/spdk/idxd.h 00:03:33.575 CC examples/ioat/perf/perf.o 00:03:33.575 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.575 TEST_HEADER include/spdk/init.h 00:03:33.575 CC examples/util/zipf/zipf.o 00:03:33.575 TEST_HEADER include/spdk/ioat.h 00:03:33.575 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.575 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.575 TEST_HEADER include/spdk/json.h 00:03:33.575 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.833 TEST_HEADER include/spdk/keyring.h 00:03:33.833 TEST_HEADER include/spdk/keyring_module.h 00:03:33.833 TEST_HEADER include/spdk/likely.h 00:03:33.833 TEST_HEADER include/spdk/log.h 00:03:33.833 TEST_HEADER include/spdk/lvol.h 00:03:33.833 TEST_HEADER include/spdk/md5.h 00:03:33.833 CC test/thread/poller_perf/poller_perf.o 00:03:33.833 TEST_HEADER include/spdk/memory.h 00:03:33.833 TEST_HEADER include/spdk/mmio.h 00:03:33.833 TEST_HEADER include/spdk/nbd.h 00:03:33.833 TEST_HEADER include/spdk/net.h 00:03:33.833 TEST_HEADER include/spdk/notify.h 00:03:33.833 TEST_HEADER include/spdk/nvme.h 00:03:33.833 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.833 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.833 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.833 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.833 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.833 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.833 CC test/app/bdev_svc/bdev_svc.o 00:03:33.833 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.833 TEST_HEADER include/spdk/nvmf.h 00:03:33.833 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.833 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.833 CC test/dma/test_dma/test_dma.o 00:03:33.833 TEST_HEADER include/spdk/opal.h 00:03:33.833 TEST_HEADER include/spdk/opal_spec.h 00:03:33.833 TEST_HEADER include/spdk/pci_ids.h 00:03:33.833 TEST_HEADER include/spdk/pipe.h 00:03:33.833 TEST_HEADER include/spdk/queue.h 00:03:33.833 TEST_HEADER include/spdk/reduce.h 00:03:33.833 TEST_HEADER include/spdk/rpc.h 00:03:33.833 TEST_HEADER include/spdk/scheduler.h 00:03:33.833 TEST_HEADER include/spdk/scsi.h 00:03:33.833 TEST_HEADER 
include/spdk/scsi_spec.h 00:03:33.833 TEST_HEADER include/spdk/sock.h 00:03:33.833 TEST_HEADER include/spdk/stdinc.h 00:03:33.833 TEST_HEADER include/spdk/string.h 00:03:33.833 TEST_HEADER include/spdk/thread.h 00:03:33.833 TEST_HEADER include/spdk/trace.h 00:03:33.833 TEST_HEADER include/spdk/trace_parser.h 00:03:33.833 TEST_HEADER include/spdk/tree.h 00:03:33.833 TEST_HEADER include/spdk/ublk.h 00:03:33.833 TEST_HEADER include/spdk/util.h 00:03:33.833 TEST_HEADER include/spdk/uuid.h 00:03:33.833 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.833 TEST_HEADER include/spdk/version.h 00:03:33.834 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.834 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.834 TEST_HEADER include/spdk/vhost.h 00:03:33.834 TEST_HEADER include/spdk/vmd.h 00:03:33.834 TEST_HEADER include/spdk/xor.h 00:03:33.834 TEST_HEADER include/spdk/zipf.h 00:03:33.834 CXX test/cpp_headers/accel.o 00:03:33.834 LINK rpc_client_test 00:03:33.834 LINK interrupt_tgt 00:03:33.834 LINK poller_perf 00:03:33.834 LINK zipf 00:03:33.834 LINK bdev_svc 00:03:33.834 LINK ioat_perf 00:03:33.834 CXX test/cpp_headers/accel_module.o 00:03:34.090 LINK spdk_trace 00:03:34.090 CC test/env/vtophys/vtophys.o 00:03:34.090 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.090 CXX test/cpp_headers/assert.o 00:03:34.090 CC examples/ioat/verify/verify.o 00:03:34.090 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.090 CC test/event/event_perf/event_perf.o 00:03:34.090 CC test/event/reactor/reactor.o 00:03:34.090 LINK test_dma 00:03:34.090 LINK mem_callbacks 00:03:34.090 CC app/trace_record/trace_record.o 00:03:34.090 LINK vtophys 00:03:34.090 LINK env_dpdk_post_init 00:03:34.090 CXX test/cpp_headers/barrier.o 00:03:34.090 LINK event_perf 00:03:34.348 LINK reactor 00:03:34.348 LINK verify 00:03:34.348 CC test/event/reactor_perf/reactor_perf.o 00:03:34.348 CXX test/cpp_headers/base64.o 00:03:34.348 CC test/event/app_repeat/app_repeat.o 00:03:34.348 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.348 CC test/env/memory/memory_ut.o 00:03:34.348 LINK nvme_fuzz 00:03:34.348 CC test/event/scheduler/scheduler.o 00:03:34.348 LINK reactor_perf 00:03:34.348 LINK spdk_trace_record 00:03:34.348 CC test/app/histogram_perf/histogram_perf.o 00:03:34.348 CXX test/cpp_headers/bdev.o 00:03:34.605 LINK app_repeat 00:03:34.605 LINK histogram_perf 00:03:34.606 CC examples/thread/thread/thread_ex.o 00:03:34.606 LINK scheduler 00:03:34.606 CC app/nvmf_tgt/nvmf_main.o 00:03:34.606 CXX test/cpp_headers/bdev_module.o 00:03:34.606 CC examples/sock/hello_world/hello_sock.o 00:03:34.606 CC examples/vmd/lsvmd/lsvmd.o 00:03:34.864 CC examples/idxd/perf/perf.o 00:03:34.864 LINK nvmf_tgt 00:03:34.864 LINK lsvmd 00:03:34.864 LINK thread 00:03:34.864 CC test/app/jsoncat/jsoncat.o 00:03:34.864 CXX test/cpp_headers/bdev_zone.o 00:03:34.864 CC test/accel/dif/dif.o 00:03:34.864 LINK hello_sock 00:03:34.864 LINK jsoncat 00:03:34.864 CXX test/cpp_headers/bit_array.o 00:03:34.864 CXX test/cpp_headers/bit_pool.o 00:03:34.864 CC examples/vmd/led/led.o 00:03:35.123 CXX test/cpp_headers/blob_bdev.o 00:03:35.123 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.123 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.123 LINK idxd_perf 00:03:35.123 LINK led 00:03:35.123 CXX test/cpp_headers/blobfs.o 00:03:35.123 CC test/blobfs/mkfs/mkfs.o 00:03:35.123 CC test/app/stub/stub.o 00:03:35.123 LINK iscsi_tgt 00:03:35.381 CC test/nvme/aer/aer.o 00:03:35.381 CC test/lvol/esnap/esnap.o 00:03:35.381 CXX test/cpp_headers/blob.o 00:03:35.381 CC 
examples/accel/perf/accel_perf.o 00:03:35.381 LINK dif 00:03:35.381 LINK stub 00:03:35.381 LINK mkfs 00:03:35.381 LINK memory_ut 00:03:35.381 CXX test/cpp_headers/conf.o 00:03:35.381 LINK aer 00:03:35.639 CC app/spdk_tgt/spdk_tgt.o 00:03:35.639 CXX test/cpp_headers/config.o 00:03:35.639 CXX test/cpp_headers/cpuset.o 00:03:35.639 CC test/nvme/reset/reset.o 00:03:35.639 CC test/env/pci/pci_ut.o 00:03:35.639 LINK spdk_tgt 00:03:35.639 CC examples/nvme/hello_world/hello_world.o 00:03:35.639 CC examples/blob/hello_world/hello_blob.o 00:03:35.639 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:35.639 LINK accel_perf 00:03:35.639 CXX test/cpp_headers/crc16.o 00:03:35.897 CC app/spdk_lspci/spdk_lspci.o 00:03:35.897 LINK hello_blob 00:03:35.897 CXX test/cpp_headers/crc32.o 00:03:35.897 LINK reset 00:03:35.897 LINK hello_world 00:03:35.897 CC examples/blob/cli/blobcli.o 00:03:35.897 LINK spdk_lspci 00:03:35.897 LINK hello_fsdev 00:03:35.897 CXX test/cpp_headers/crc64.o 00:03:36.155 LINK iscsi_fuzz 00:03:36.155 LINK pci_ut 00:03:36.155 CC examples/nvme/reconnect/reconnect.o 00:03:36.155 CC test/nvme/sgl/sgl.o 00:03:36.155 CXX test/cpp_headers/dif.o 00:03:36.155 CC test/bdev/bdevio/bdevio.o 00:03:36.155 CC app/spdk_nvme_perf/perf.o 00:03:36.155 CC app/spdk_nvme_identify/identify.o 00:03:36.155 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.155 CXX test/cpp_headers/dma.o 00:03:36.412 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.412 LINK reconnect 00:03:36.412 LINK blobcli 00:03:36.412 LINK sgl 00:03:36.412 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.412 CXX test/cpp_headers/endian.o 00:03:36.412 LINK spdk_nvme_discover 00:03:36.412 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.412 CC examples/nvme/arbitration/arbitration.o 00:03:36.412 LINK bdevio 00:03:36.412 CXX test/cpp_headers/env_dpdk.o 00:03:36.670 CC test/nvme/e2edp/nvme_dp.o 00:03:36.670 CC app/spdk_top/spdk_top.o 00:03:36.670 CXX test/cpp_headers/env.o 00:03:36.670 CXX test/cpp_headers/event.o 00:03:36.670 LINK vhost_fuzz 00:03:36.670 LINK nvme_dp 00:03:36.929 CXX test/cpp_headers/fd_group.o 00:03:36.929 CC examples/nvme/hotplug/hotplug.o 00:03:36.929 LINK arbitration 00:03:36.929 LINK nvme_manage 00:03:36.929 CXX test/cpp_headers/fd.o 00:03:36.929 CXX test/cpp_headers/file.o 00:03:36.929 CC test/nvme/overhead/overhead.o 00:03:36.929 CC app/vhost/vhost.o 00:03:36.929 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:36.929 LINK spdk_nvme_perf 00:03:36.929 LINK spdk_nvme_identify 00:03:36.929 LINK hotplug 00:03:37.187 CXX test/cpp_headers/fsdev.o 00:03:37.188 CXX test/cpp_headers/fsdev_module.o 00:03:37.188 CXX test/cpp_headers/ftl.o 00:03:37.188 CC test/nvme/err_injection/err_injection.o 00:03:37.188 LINK cmb_copy 00:03:37.188 CXX test/cpp_headers/fuse_dispatcher.o 00:03:37.188 LINK vhost 00:03:37.188 LINK overhead 00:03:37.188 CXX test/cpp_headers/gpt_spec.o 00:03:37.188 CC examples/nvme/abort/abort.o 00:03:37.188 CXX test/cpp_headers/hexlify.o 00:03:37.446 LINK err_injection 00:03:37.446 LINK spdk_top 00:03:37.446 CC test/nvme/startup/startup.o 00:03:37.446 CXX test/cpp_headers/histogram_data.o 00:03:37.446 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.446 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.446 CC app/spdk_dd/spdk_dd.o 00:03:37.446 CXX test/cpp_headers/idxd.o 00:03:37.446 CXX test/cpp_headers/idxd_spec.o 00:03:37.446 LINK pmr_persistence 00:03:37.446 CXX test/cpp_headers/init.o 00:03:37.446 LINK startup 00:03:37.446 LINK hello_bdev 00:03:37.446 CXX test/cpp_headers/ioat.o 00:03:37.446 LINK 
abort 00:03:37.703 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.703 CXX test/cpp_headers/ioat_spec.o 00:03:37.703 CXX test/cpp_headers/iscsi_spec.o 00:03:37.703 CC test/nvme/reserve/reserve.o 00:03:37.703 CC test/nvme/simple_copy/simple_copy.o 00:03:37.703 CC app/fio/nvme/fio_plugin.o 00:03:37.703 CC app/fio/bdev/fio_plugin.o 00:03:37.703 LINK spdk_dd 00:03:37.703 CC test/nvme/connect_stress/connect_stress.o 00:03:37.703 CC test/nvme/boot_partition/boot_partition.o 00:03:37.703 CXX test/cpp_headers/json.o 00:03:37.703 LINK reserve 00:03:37.962 LINK simple_copy 00:03:37.962 CXX test/cpp_headers/jsonrpc.o 00:03:37.962 LINK boot_partition 00:03:37.962 CXX test/cpp_headers/keyring.o 00:03:37.962 LINK connect_stress 00:03:37.962 CC test/nvme/compliance/nvme_compliance.o 00:03:37.962 CXX test/cpp_headers/keyring_module.o 00:03:37.962 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.962 CXX test/cpp_headers/likely.o 00:03:37.962 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.222 CXX test/cpp_headers/log.o 00:03:38.222 CC test/nvme/fdp/fdp.o 00:03:38.222 CC test/nvme/cuse/cuse.o 00:03:38.222 LINK fused_ordering 00:03:38.222 LINK doorbell_aers 00:03:38.222 LINK spdk_bdev 00:03:38.222 CXX test/cpp_headers/lvol.o 00:03:38.222 LINK spdk_nvme 00:03:38.222 CXX test/cpp_headers/md5.o 00:03:38.222 CXX test/cpp_headers/memory.o 00:03:38.222 CXX test/cpp_headers/mmio.o 00:03:38.222 LINK nvme_compliance 00:03:38.222 CXX test/cpp_headers/nbd.o 00:03:38.481 CXX test/cpp_headers/net.o 00:03:38.481 CXX test/cpp_headers/notify.o 00:03:38.481 LINK bdevperf 00:03:38.481 CXX test/cpp_headers/nvme.o 00:03:38.481 CXX test/cpp_headers/nvme_intel.o 00:03:38.481 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.481 LINK fdp 00:03:38.481 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.481 CXX test/cpp_headers/nvme_spec.o 00:03:38.481 CXX test/cpp_headers/nvme_zns.o 00:03:38.481 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.481 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.481 CXX test/cpp_headers/nvmf.o 00:03:38.741 CXX test/cpp_headers/nvmf_spec.o 00:03:38.741 CXX test/cpp_headers/nvmf_transport.o 00:03:38.741 CXX test/cpp_headers/opal.o 00:03:38.741 CXX test/cpp_headers/opal_spec.o 00:03:38.741 CXX test/cpp_headers/pci_ids.o 00:03:38.741 CXX test/cpp_headers/pipe.o 00:03:38.741 CC examples/nvmf/nvmf/nvmf.o 00:03:38.741 CXX test/cpp_headers/queue.o 00:03:38.741 CXX test/cpp_headers/reduce.o 00:03:38.741 CXX test/cpp_headers/rpc.o 00:03:38.741 CXX test/cpp_headers/scheduler.o 00:03:38.741 CXX test/cpp_headers/scsi.o 00:03:38.741 CXX test/cpp_headers/scsi_spec.o 00:03:38.741 CXX test/cpp_headers/sock.o 00:03:38.741 CXX test/cpp_headers/stdinc.o 00:03:38.741 CXX test/cpp_headers/string.o 00:03:39.001 CXX test/cpp_headers/thread.o 00:03:39.001 CXX test/cpp_headers/trace.o 00:03:39.001 CXX test/cpp_headers/trace_parser.o 00:03:39.001 CXX test/cpp_headers/tree.o 00:03:39.001 CXX test/cpp_headers/ublk.o 00:03:39.001 CXX test/cpp_headers/util.o 00:03:39.001 CXX test/cpp_headers/uuid.o 00:03:39.001 CXX test/cpp_headers/version.o 00:03:39.001 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.001 LINK nvmf 00:03:39.001 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.001 CXX test/cpp_headers/vhost.o 00:03:39.001 CXX test/cpp_headers/vmd.o 00:03:39.001 CXX test/cpp_headers/xor.o 00:03:39.001 CXX test/cpp_headers/zipf.o 00:03:39.261 LINK cuse 00:03:39.830 LINK esnap 00:03:40.090 00:03:40.090 real 1m1.893s 00:03:40.090 user 5m53.264s 00:03:40.090 sys 1m0.709s 00:03:40.090 01:29:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 
00:03:40.090 01:29:23 make -- common/autotest_common.sh@10 -- $ set +x 00:03:40.090 ************************************ 00:03:40.090 END TEST make 00:03:40.090 ************************************ 00:03:40.090 01:29:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:40.090 01:29:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:40.090 01:29:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:40.090 01:29:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.091 01:29:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:40.091 01:29:23 -- pm/common@44 -- $ pid=5083 00:03:40.091 01:29:23 -- pm/common@50 -- $ kill -TERM 5083 00:03:40.091 01:29:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.091 01:29:23 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:40.091 01:29:23 -- pm/common@44 -- $ pid=5084 00:03:40.091 01:29:23 -- pm/common@50 -- $ kill -TERM 5084 00:03:40.091 01:29:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:40.091 01:29:23 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:40.091 01:29:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:40.091 01:29:24 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:40.091 01:29:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:40.351 01:29:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:40.351 01:29:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:40.351 01:29:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:40.351 01:29:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:40.351 01:29:24 -- scripts/common.sh@336 -- # IFS=.-: 00:03:40.351 01:29:24 -- scripts/common.sh@336 -- # read -ra ver1 00:03:40.351 01:29:24 -- scripts/common.sh@337 -- # IFS=.-: 00:03:40.351 01:29:24 -- scripts/common.sh@337 -- # read -ra ver2 00:03:40.351 01:29:24 -- scripts/common.sh@338 -- # local 'op=<' 00:03:40.351 01:29:24 -- scripts/common.sh@340 -- # ver1_l=2 00:03:40.351 01:29:24 -- scripts/common.sh@341 -- # ver2_l=1 00:03:40.351 01:29:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:40.351 01:29:24 -- scripts/common.sh@344 -- # case "$op" in 00:03:40.351 01:29:24 -- scripts/common.sh@345 -- # : 1 00:03:40.351 01:29:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:40.351 01:29:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:40.351 01:29:24 -- scripts/common.sh@365 -- # decimal 1 00:03:40.351 01:29:24 -- scripts/common.sh@353 -- # local d=1 00:03:40.351 01:29:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:40.351 01:29:24 -- scripts/common.sh@355 -- # echo 1 00:03:40.351 01:29:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:40.351 01:29:24 -- scripts/common.sh@366 -- # decimal 2 00:03:40.351 01:29:24 -- scripts/common.sh@353 -- # local d=2 00:03:40.351 01:29:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:40.351 01:29:24 -- scripts/common.sh@355 -- # echo 2 00:03:40.352 01:29:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:40.352 01:29:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:40.352 01:29:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:40.352 01:29:24 -- scripts/common.sh@368 -- # return 0 00:03:40.352 01:29:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:40.352 01:29:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:40.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.352 --rc genhtml_branch_coverage=1 00:03:40.352 --rc genhtml_function_coverage=1 00:03:40.352 --rc genhtml_legend=1 00:03:40.352 --rc geninfo_all_blocks=1 00:03:40.352 --rc geninfo_unexecuted_blocks=1 00:03:40.352 00:03:40.352 ' 00:03:40.352 01:29:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:40.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.352 --rc genhtml_branch_coverage=1 00:03:40.352 --rc genhtml_function_coverage=1 00:03:40.352 --rc genhtml_legend=1 00:03:40.352 --rc geninfo_all_blocks=1 00:03:40.352 --rc geninfo_unexecuted_blocks=1 00:03:40.352 00:03:40.352 ' 00:03:40.352 01:29:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:40.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.352 --rc genhtml_branch_coverage=1 00:03:40.352 --rc genhtml_function_coverage=1 00:03:40.352 --rc genhtml_legend=1 00:03:40.352 --rc geninfo_all_blocks=1 00:03:40.352 --rc geninfo_unexecuted_blocks=1 00:03:40.352 00:03:40.352 ' 00:03:40.352 01:29:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:40.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:40.352 --rc genhtml_branch_coverage=1 00:03:40.352 --rc genhtml_function_coverage=1 00:03:40.352 --rc genhtml_legend=1 00:03:40.352 --rc geninfo_all_blocks=1 00:03:40.352 --rc geninfo_unexecuted_blocks=1 00:03:40.352 00:03:40.352 ' 00:03:40.352 01:29:24 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:40.352 01:29:24 -- nvmf/common.sh@7 -- # uname -s 00:03:40.352 01:29:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:40.352 01:29:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:40.352 01:29:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:40.352 01:29:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:40.352 01:29:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:40.352 01:29:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:40.352 01:29:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:40.352 01:29:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:40.352 01:29:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:40.352 01:29:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:40.352 01:29:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:917a758e-796b-4413-864e-1c730c68b4e2 00:03:40.352 
01:29:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=917a758e-796b-4413-864e-1c730c68b4e2 00:03:40.352 01:29:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:40.352 01:29:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:40.352 01:29:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:40.352 01:29:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:40.352 01:29:24 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:40.352 01:29:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:40.352 01:29:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:40.352 01:29:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:40.352 01:29:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:40.352 01:29:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.352 01:29:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.352 01:29:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.352 01:29:24 -- paths/export.sh@5 -- # export PATH 00:03:40.352 01:29:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:40.352 01:29:24 -- nvmf/common.sh@51 -- # : 0 00:03:40.352 01:29:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:40.352 01:29:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:40.352 01:29:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:40.352 01:29:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:40.352 01:29:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:40.352 01:29:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:40.352 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:40.352 01:29:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:40.352 01:29:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:40.352 01:29:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:40.352 01:29:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:40.352 01:29:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:40.352 01:29:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:40.352 01:29:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:40.352 01:29:24 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.352 01:29:24 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:40.352 01:29:24 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:40.352 01:29:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:40.352 01:29:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:40.352 01:29:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:40.352 01:29:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:40.352 01:29:24 -- spdk/autotest.sh@48 -- # udevadm_pid=54176 00:03:40.352 01:29:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:40.352 01:29:24 -- pm/common@17 -- # local monitor 00:03:40.352 01:29:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.352 01:29:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:40.352 01:29:24 -- pm/common@25 -- # sleep 1 00:03:40.352 01:29:24 -- pm/common@21 -- # date +%s 00:03:40.352 01:29:24 -- pm/common@21 -- # date +%s 00:03:40.352 01:29:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732152564 00:03:40.352 01:29:24 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732152564 00:03:40.352 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732152564_collect-cpu-load.pm.log 00:03:40.352 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732152564_collect-vmstat.pm.log 00:03:41.295 01:29:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:41.295 01:29:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:41.295 01:29:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:41.295 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:03:41.295 01:29:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:41.295 01:29:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:41.295 01:29:25 -- common/autotest_common.sh@10 -- # set +x 00:03:41.295 01:29:25 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:41.295 01:29:25 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:41.295 01:29:25 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:41.295 01:29:25 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:41.295 01:29:25 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:41.295 01:29:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:41.295 01:29:25 -- common/autotest_common.sh@1457 -- # uname 00:03:41.295 01:29:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:41.295 01:29:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:41.295 01:29:25 -- common/autotest_common.sh@1477 -- # uname 00:03:41.295 01:29:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:41.295 01:29:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:41.295 01:29:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:41.555 lcov: LCOV version 1.15 00:03:41.555 01:29:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:56.465 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:56.465 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:11.376 01:29:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:11.376 01:29:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.376 01:29:55 -- common/autotest_common.sh@10 -- # set +x 00:04:11.376 01:29:55 -- spdk/autotest.sh@78 -- # rm -f 00:04:11.376 01:29:55 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.209 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:12.209 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:12.209 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:12.209 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:12.209 01:29:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:12.209 01:29:55 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:12.209 01:29:55 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:12.209 01:29:55 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:12.209 01:29:55 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:12.209 01:29:55 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:12.209 01:29:55 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:12.209 01:29:55 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:12.209 01:29:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:12.209 01:29:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.209 01:29:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.209 01:29:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:12.209 01:29:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:12.209 01:29:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:12.209 No valid GPT data, bailing 00:04:12.209 01:29:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.209 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.209 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.209 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.209 1+0 records in 00:04:12.209 1+0 records out 00:04:12.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00951814 s, 110 MB/s 00:04:12.209 01:29:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.209 01:29:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.209 01:29:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:12.209 01:29:56 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:12.209 01:29:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:12.209 No valid GPT data, bailing 00:04:12.209 01:29:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.209 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.209 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.209 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.209 1+0 records in 00:04:12.209 1+0 records out 00:04:12.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431604 s, 243 MB/s 00:04:12.209 01:29:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.209 01:29:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.209 01:29:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:12.209 01:29:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:12.209 01:29:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.209 No valid GPT data, bailing 00:04:12.209 01:29:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.209 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.209 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.209 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.209 1+0 
records in 00:04:12.209 1+0 records out 00:04:12.209 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531808 s, 197 MB/s 00:04:12.209 01:29:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.209 01:29:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.209 01:29:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:12.209 01:29:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:12.209 01:29:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:12.494 No valid GPT data, bailing 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.494 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.494 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:12.494 1+0 records in 00:04:12.494 1+0 records out 00:04:12.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634627 s, 165 MB/s 00:04:12.494 01:29:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.494 01:29:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.494 01:29:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:12.494 01:29:56 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:12.494 01:29:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:12.494 No valid GPT data, bailing 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.494 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.494 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:12.494 1+0 records in 00:04:12.494 1+0 records out 00:04:12.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351129 s, 299 MB/s 00:04:12.494 01:29:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.494 01:29:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:12.494 01:29:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:12.494 01:29:56 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:12.494 01:29:56 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.494 No valid GPT data, bailing 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.494 01:29:56 -- scripts/common.sh@394 -- # pt= 00:04:12.495 01:29:56 -- scripts/common.sh@395 -- # return 1 00:04:12.495 01:29:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.495 1+0 records in 00:04:12.495 1+0 records out 00:04:12.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386485 s, 271 MB/s 00:04:12.495 01:29:56 -- spdk/autotest.sh@105 -- # sync 00:04:12.755 01:29:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.755 01:29:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.755 01:29:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.138 01:29:58 -- spdk/autotest.sh@111 -- # uname -s 00:04:14.138 01:29:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:14.138 01:29:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:14.138 01:29:58 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.968 
Hugepages 00:04:14.968 node hugesize free / total 00:04:14.969 node0 1048576kB 0 / 0 00:04:14.969 node0 2048kB 0 / 0 00:04:14.969 00:04:14.969 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:14.969 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:14.969 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:15.229 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:15.229 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:15.229 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:15.229 01:29:59 -- spdk/autotest.sh@117 -- # uname -s 00:04:15.229 01:29:59 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:15.229 01:29:59 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:15.229 01:29:59 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.061 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.061 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.061 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.321 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:16.321 01:30:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:17.262 01:30:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:17.262 01:30:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:17.262 01:30:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:17.262 01:30:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:17.262 01:30:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:17.262 01:30:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:17.262 01:30:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:17.262 01:30:01 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:17.262 01:30:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:17.262 01:30:01 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:17.262 01:30:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:17.262 01:30:01 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.784 Waiting for block devices as requested 00:04:17.784 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.784 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.784 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:18.044 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.334 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:23.334 01:30:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.334 01:30:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.334 01:30:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.334 01:30:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1543 -- # continue 00:04:23.334 01:30:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.334 01:30:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.334 01:30:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.334 01:30:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1543 -- # continue 00:04:23.334 01:30:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:23.334 01:30:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.334 01:30:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.335 01:30:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.335 01:30:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.335 01:30:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:23.335 01:30:06 -- common/autotest_common.sh@1543 -- # continue 00:04:23.335 01:30:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:23.335 01:30:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:23.335 01:30:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:23.335 01:30:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:23.335 01:30:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:23.335 01:30:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:23.335 01:30:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:23.335 01:30:06 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:23.335 01:30:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:23.335 01:30:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.335 01:30:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:23.335 01:30:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:23.335 01:30:06 -- common/autotest_common.sh@1543 -- # continue 00:04:23.335 01:30:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:23.335 01:30:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:23.335 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 01:30:07 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:23.335 01:30:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.335 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:04:23.335 01:30:07 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.170 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.170 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.170 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.431 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.431 01:30:08 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:24.431 01:30:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.431 01:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:24.431 01:30:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:24.431 01:30:08 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:24.432 01:30:08 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:24.432 01:30:08 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:24.432 01:30:08 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:24.432 01:30:08 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:24.432 01:30:08 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:24.432 01:30:08 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:24.432 01:30:08 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.432 01:30:08 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.432 01:30:08 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.432 01:30:08 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.432 01:30:08 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.432 01:30:08 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.432 01:30:08 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.432 01:30:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.432 01:30:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.432 01:30:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.432 01:30:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.432 01:30:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.432 01:30:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
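For reference, the opal_revert_cleanup steps traced through this stretch of the log reduce to a simple sysfs filter: enumerate the NVMe controllers, read each one's PCI device ID, and keep only those matching 0x0a54. A condensed sketch of that logic follows; the cat/compare steps mirror the xtrace lines around it, while the variable names are illustrative rather than taken from autotest_common.sh.

    # Illustrative sketch of the device-ID filter traced here (not the harness source).
    opal_bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/${bdf}/device")
        # Only controllers reporting PCI device ID 0x0a54 are targets for the OPAL revert.
        [[ ${device} == 0x0a54 ]] && opal_bdfs+=("${bdf}")
    done
    # The emulated controllers in this run all report 0x0010, so the array stays
    # empty and the cleanup returns without reverting anything.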
00:04:24.432 01:30:08 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:24.432 01:30:08 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:24.432 01:30:08 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:24.432 01:30:08 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:24.432 01:30:08 -- common/autotest_common.sh@1572 -- # return 0 00:04:24.432 01:30:08 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:24.432 01:30:08 -- common/autotest_common.sh@1580 -- # return 0 00:04:24.432 01:30:08 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:24.432 01:30:08 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:24.432 01:30:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.432 01:30:08 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:24.432 01:30:08 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:24.432 01:30:08 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.432 01:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:24.432 01:30:08 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:24.432 01:30:08 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.432 01:30:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.432 01:30:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.432 01:30:08 -- common/autotest_common.sh@10 -- # set +x 00:04:24.432 ************************************ 00:04:24.432 START TEST env 00:04:24.432 ************************************ 00:04:24.432 01:30:08 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:24.432 * Looking for test storage... 00:04:24.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:24.432 01:30:08 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.432 01:30:08 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.432 01:30:08 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.693 01:30:08 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.693 01:30:08 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.693 01:30:08 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.693 01:30:08 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.693 01:30:08 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.694 01:30:08 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.694 01:30:08 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.694 01:30:08 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.694 01:30:08 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.694 01:30:08 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.694 01:30:08 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.694 01:30:08 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.694 01:30:08 env -- scripts/common.sh@344 -- # case "$op" in 00:04:24.694 01:30:08 env -- scripts/common.sh@345 -- # : 1 00:04:24.694 01:30:08 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.694 01:30:08 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.694 01:30:08 env -- scripts/common.sh@365 -- # decimal 1 00:04:24.694 01:30:08 env -- scripts/common.sh@353 -- # local d=1 00:04:24.694 01:30:08 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.694 01:30:08 env -- scripts/common.sh@355 -- # echo 1 00:04:24.694 01:30:08 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.694 01:30:08 env -- scripts/common.sh@366 -- # decimal 2 00:04:24.694 01:30:08 env -- scripts/common.sh@353 -- # local d=2 00:04:24.694 01:30:08 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.694 01:30:08 env -- scripts/common.sh@355 -- # echo 2 00:04:24.694 01:30:08 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.694 01:30:08 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.694 01:30:08 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.694 01:30:08 env -- scripts/common.sh@368 -- # return 0 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.694 --rc genhtml_branch_coverage=1 00:04:24.694 --rc genhtml_function_coverage=1 00:04:24.694 --rc genhtml_legend=1 00:04:24.694 --rc geninfo_all_blocks=1 00:04:24.694 --rc geninfo_unexecuted_blocks=1 00:04:24.694 00:04:24.694 ' 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.694 --rc genhtml_branch_coverage=1 00:04:24.694 --rc genhtml_function_coverage=1 00:04:24.694 --rc genhtml_legend=1 00:04:24.694 --rc geninfo_all_blocks=1 00:04:24.694 --rc geninfo_unexecuted_blocks=1 00:04:24.694 00:04:24.694 ' 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.694 --rc genhtml_branch_coverage=1 00:04:24.694 --rc genhtml_function_coverage=1 00:04:24.694 --rc genhtml_legend=1 00:04:24.694 --rc geninfo_all_blocks=1 00:04:24.694 --rc geninfo_unexecuted_blocks=1 00:04:24.694 00:04:24.694 ' 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.694 --rc genhtml_branch_coverage=1 00:04:24.694 --rc genhtml_function_coverage=1 00:04:24.694 --rc genhtml_legend=1 00:04:24.694 --rc geninfo_all_blocks=1 00:04:24.694 --rc geninfo_unexecuted_blocks=1 00:04:24.694 00:04:24.694 ' 00:04:24.694 01:30:08 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.694 01:30:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.694 01:30:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.694 ************************************ 00:04:24.694 START TEST env_memory 00:04:24.694 ************************************ 00:04:24.694 01:30:08 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:24.694 00:04:24.694 00:04:24.694 CUnit - A unit testing framework for C - Version 2.1-3 00:04:24.694 http://cunit.sourceforge.net/ 00:04:24.694 00:04:24.694 00:04:24.694 Suite: memory 00:04:24.694 Test: alloc and free memory map ...[2024-11-21 01:30:08.509715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:24.694 passed 00:04:24.694 Test: mem map translation ...[2024-11-21 01:30:08.548475] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:24.694 [2024-11-21 01:30:08.548591] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:24.694 [2024-11-21 01:30:08.548720] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:24.694 [2024-11-21 01:30:08.548794] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:24.694 passed 00:04:24.694 Test: mem map registration ...[2024-11-21 01:30:08.616922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:24.694 [2024-11-21 01:30:08.616963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:24.694 passed 00:04:24.955 Test: mem map adjacent registrations ...passed 00:04:24.955 00:04:24.955 Run Summary: Type Total Ran Passed Failed Inactive 00:04:24.955 suites 1 1 n/a 0 0 00:04:24.955 tests 4 4 4 0 0 00:04:24.955 asserts 152 152 152 0 n/a 00:04:24.955 00:04:24.955 Elapsed time = 0.232 seconds 00:04:24.955 00:04:24.955 real 0m0.270s 00:04:24.955 user 0m0.241s 00:04:24.955 sys 0m0.019s 00:04:24.955 ************************************ 00:04:24.955 END TEST env_memory 00:04:24.955 ************************************ 00:04:24.955 01:30:08 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.955 01:30:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:24.955 01:30:08 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.955 01:30:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.955 01:30:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.955 01:30:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:24.955 ************************************ 00:04:24.955 START TEST env_vtophys 00:04:24.955 ************************************ 00:04:24.955 01:30:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:24.955 EAL: lib.eal log level changed from notice to debug 00:04:24.955 EAL: Detected lcore 0 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 1 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 2 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 3 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 4 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 5 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 6 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 7 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 8 as core 0 on socket 0 00:04:24.955 EAL: Detected lcore 9 as core 0 on socket 0 00:04:24.955 EAL: Maximum logical cores by configuration: 128 00:04:24.955 EAL: Detected CPU lcores: 10 00:04:24.955 EAL: Detected NUMA nodes: 1 00:04:24.955 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:24.955 EAL: Detected shared linkage of DPDK 00:04:24.955 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:24.955 EAL: Selected IOVA mode 'PA' 00:04:24.955 EAL: Probing VFIO support... 00:04:24.956 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:24.956 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:24.956 EAL: Ask a virtual area of 0x2e000 bytes 00:04:24.956 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:24.956 EAL: Setting up physically contiguous memory... 00:04:24.956 EAL: Setting maximum number of open files to 524288 00:04:24.956 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:24.956 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:24.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.956 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:24.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.956 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:24.956 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:24.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.956 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:24.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.956 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:24.956 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:24.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.956 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:24.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.956 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:24.956 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:24.956 EAL: Ask a virtual area of 0x61000 bytes 00:04:24.956 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:24.956 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:24.956 EAL: Ask a virtual area of 0x400000000 bytes 00:04:24.956 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:24.956 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:24.956 EAL: Hugepages will be freed exactly as allocated. 00:04:24.956 EAL: No shared files mode enabled, IPC is disabled 00:04:24.956 EAL: No shared files mode enabled, IPC is disabled 00:04:25.216 EAL: TSC frequency is ~2600000 KHz 00:04:25.216 EAL: Main lcore 0 is ready (tid=7f3245727a40;cpuset=[0]) 00:04:25.216 EAL: Trying to obtain current memory policy. 00:04:25.216 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.216 EAL: Restoring previous memory policy: 0 00:04:25.216 EAL: request: mp_malloc_sync 00:04:25.216 EAL: No shared files mode enabled, IPC is disabled 00:04:25.216 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.216 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:25.216 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.216 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.216 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:25.216 00:04:25.216 00:04:25.216 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.216 http://cunit.sourceforge.net/ 00:04:25.216 00:04:25.216 00:04:25.216 Suite: components_suite 00:04:25.474 Test: vtophys_malloc_test ...passed 00:04:25.474 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.474 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.475 EAL: Trying to obtain current memory policy. 00:04:25.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.475 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.475 EAL: Trying to obtain current memory policy. 00:04:25.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.475 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.475 EAL: Trying to obtain current memory policy. 00:04:25.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.475 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.475 EAL: Trying to obtain current memory policy. 00:04:25.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.475 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.475 EAL: Trying to obtain current memory policy. 
00:04:25.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.475 EAL: Restoring previous memory policy: 4 00:04:25.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.475 EAL: request: mp_malloc_sync 00:04:25.475 EAL: No shared files mode enabled, IPC is disabled 00:04:25.475 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.735 EAL: request: mp_malloc_sync 00:04:25.735 EAL: No shared files mode enabled, IPC is disabled 00:04:25.735 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.735 EAL: Trying to obtain current memory policy. 00:04:25.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.735 EAL: Restoring previous memory policy: 4 00:04:25.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.735 EAL: request: mp_malloc_sync 00:04:25.735 EAL: No shared files mode enabled, IPC is disabled 00:04:25.735 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.996 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.996 EAL: request: mp_malloc_sync 00:04:25.996 EAL: No shared files mode enabled, IPC is disabled 00:04:25.996 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.996 EAL: Trying to obtain current memory policy. 00:04:25.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.996 EAL: Restoring previous memory policy: 4 00:04:25.996 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.996 EAL: request: mp_malloc_sync 00:04:25.996 EAL: No shared files mode enabled, IPC is disabled 00:04:25.996 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.569 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.569 EAL: request: mp_malloc_sync 00:04:26.569 EAL: No shared files mode enabled, IPC is disabled 00:04:26.569 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.830 EAL: Trying to obtain current memory policy. 00:04:26.830 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.830 EAL: Restoring previous memory policy: 4 00:04:26.830 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.830 EAL: request: mp_malloc_sync 00:04:26.830 EAL: No shared files mode enabled, IPC is disabled 00:04:26.830 EAL: Heap on socket 0 was expanded by 514MB 00:04:27.402 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.402 EAL: request: mp_malloc_sync 00:04:27.402 EAL: No shared files mode enabled, IPC is disabled 00:04:27.402 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.974 EAL: Trying to obtain current memory policy. 
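Each vtophys_spdk_malloc_test round above allocates a buffer roughly twice the size of the previous one, which is why the heap events climb from 4MB up to 514MB; the final 1026MB round and the suite summary follow below. While the test runs, the same growth can be watched from another shell through the hugepage counters (a sketch; it assumes the dynamic 2MB hugepage allocation reported by EAL at start-up):

  watch -n 1 'grep ^HugePages_ /proc/meminfo'   # Free/Rsvd counters move as the heap is expanded and shrunk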
00:04:27.974 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.234 EAL: Restoring previous memory policy: 4 00:04:28.234 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.234 EAL: request: mp_malloc_sync 00:04:28.234 EAL: No shared files mode enabled, IPC is disabled 00:04:28.234 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.176 EAL: request: mp_malloc_sync 00:04:29.176 EAL: No shared files mode enabled, IPC is disabled 00:04:29.176 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.119 passed 00:04:30.120 00:04:30.120 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.120 suites 1 1 n/a 0 0 00:04:30.120 tests 2 2 2 0 0 00:04:30.120 asserts 5768 5768 5768 0 n/a 00:04:30.120 00:04:30.120 Elapsed time = 4.715 seconds 00:04:30.120 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.120 EAL: request: mp_malloc_sync 00:04:30.120 EAL: No shared files mode enabled, IPC is disabled 00:04:30.120 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.120 EAL: No shared files mode enabled, IPC is disabled 00:04:30.120 EAL: No shared files mode enabled, IPC is disabled 00:04:30.120 EAL: No shared files mode enabled, IPC is disabled 00:04:30.120 ************************************ 00:04:30.120 END TEST env_vtophys 00:04:30.120 ************************************ 00:04:30.120 00:04:30.120 real 0m4.971s 00:04:30.120 user 0m4.125s 00:04:30.120 sys 0m0.697s 00:04:30.120 01:30:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.120 01:30:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:30.120 01:30:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:30.120 01:30:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.120 01:30:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.120 01:30:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.120 ************************************ 00:04:30.120 START TEST env_pci 00:04:30.120 ************************************ 00:04:30.120 01:30:13 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:30.120 00:04:30.120 00:04:30.120 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.120 http://cunit.sourceforge.net/ 00:04:30.120 00:04:30.120 00:04:30.120 Suite: pci 00:04:30.120 Test: pci_hook ...[2024-11-21 01:30:13.828940] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56973 has claimed it 00:04:30.120 EAL: Cannot find device (10000:00:01.0) 00:04:30.120 EAL: Failed to attach device on primary process 00:04:30.120 passed 00:04:30.120 00:04:30.120 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.120 suites 1 1 n/a 0 0 00:04:30.120 tests 1 1 1 0 0 00:04:30.120 asserts 25 25 25 0 n/a 00:04:30.120 00:04:30.120 Elapsed time = 0.006 seconds 00:04:30.120 00:04:30.120 real 0m0.069s 00:04:30.120 user 0m0.033s 00:04:30.120 sys 0m0.034s 00:04:30.120 ************************************ 00:04:30.120 END TEST env_pci 00:04:30.120 ************************************ 00:04:30.120 01:30:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.120 01:30:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:30.120 01:30:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:30.120 01:30:13 env -- env/env.sh@15 -- # uname 00:04:30.120 01:30:13 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:30.120 01:30:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:30.120 01:30:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.120 01:30:13 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:30.120 01:30:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.120 01:30:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.120 ************************************ 00:04:30.120 START TEST env_dpdk_post_init 00:04:30.120 ************************************ 00:04:30.120 01:30:13 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:30.120 EAL: Detected CPU lcores: 10 00:04:30.120 EAL: Detected NUMA nodes: 1 00:04:30.120 EAL: Detected shared linkage of DPDK 00:04:30.120 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.120 EAL: Selected IOVA mode 'PA' 00:04:30.381 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:30.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:30.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:30.381 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:30.381 Starting DPDK initialization... 00:04:30.381 Starting SPDK post initialization... 00:04:30.381 SPDK NVMe probe 00:04:30.381 Attaching to 0000:00:10.0 00:04:30.381 Attaching to 0000:00:11.0 00:04:30.381 Attaching to 0000:00:12.0 00:04:30.381 Attaching to 0000:00:13.0 00:04:30.381 Attached to 0000:00:10.0 00:04:30.381 Attached to 0000:00:11.0 00:04:30.381 Attached to 0000:00:13.0 00:04:30.381 Attached to 0000:00:12.0 00:04:30.381 Cleaning up... 
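The four controllers attached above are the QEMU-emulated NVMe devices (1b36:0010) the VM exposes at 0000:00:10.0 through 0000:00:13.0. The same probe can be reproduced outside the harness from the repository root (a sketch; the lspci filter and the setup.sh binding step are assumptions, while the binary path and flags are taken verbatim from the trace):

  lspci -nn | grep '1b36:0010'    # list the emulated NVMe controllers
  sudo ./scripts/setup.sh         # bind them for DPDK and reserve hugepages
  ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000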
00:04:30.381 00:04:30.381 real 0m0.260s 00:04:30.381 user 0m0.080s 00:04:30.381 sys 0m0.082s 00:04:30.381 01:30:14 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.381 ************************************ 00:04:30.381 END TEST env_dpdk_post_init 00:04:30.381 ************************************ 00:04:30.381 01:30:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.381 01:30:14 env -- env/env.sh@26 -- # uname 00:04:30.381 01:30:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:30.381 01:30:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.381 01:30:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.381 01:30:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.381 01:30:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.381 ************************************ 00:04:30.381 START TEST env_mem_callbacks 00:04:30.381 ************************************ 00:04:30.381 01:30:14 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.381 EAL: Detected CPU lcores: 10 00:04:30.381 EAL: Detected NUMA nodes: 1 00:04:30.381 EAL: Detected shared linkage of DPDK 00:04:30.381 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.381 EAL: Selected IOVA mode 'PA' 00:04:30.643 00:04:30.643 00:04:30.643 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.643 http://cunit.sourceforge.net/ 00:04:30.643 00:04:30.643 00:04:30.643 Suite: memory 00:04:30.643 Test: test ... 00:04:30.643 register 0x200000200000 2097152 00:04:30.643 malloc 3145728 00:04:30.643 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.643 register 0x200000400000 4194304 00:04:30.643 buf 0x2000004fffc0 len 3145728 PASSED 00:04:30.643 malloc 64 00:04:30.643 buf 0x2000004ffec0 len 64 PASSED 00:04:30.643 malloc 4194304 00:04:30.643 register 0x200000800000 6291456 00:04:30.643 buf 0x2000009fffc0 len 4194304 PASSED 00:04:30.643 free 0x2000004fffc0 3145728 00:04:30.643 free 0x2000004ffec0 64 00:04:30.643 unregister 0x200000400000 4194304 PASSED 00:04:30.643 free 0x2000009fffc0 4194304 00:04:30.643 unregister 0x200000800000 6291456 PASSED 00:04:30.643 malloc 8388608 00:04:30.643 register 0x200000400000 10485760 00:04:30.643 buf 0x2000005fffc0 len 8388608 PASSED 00:04:30.643 free 0x2000005fffc0 8388608 00:04:30.643 unregister 0x200000400000 10485760 PASSED 00:04:30.643 passed 00:04:30.643 00:04:30.643 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.643 suites 1 1 n/a 0 0 00:04:30.643 tests 1 1 1 0 0 00:04:30.643 asserts 15 15 15 0 n/a 00:04:30.643 00:04:30.643 Elapsed time = 0.050 seconds 00:04:30.643 00:04:30.643 real 0m0.226s 00:04:30.643 user 0m0.064s 00:04:30.643 sys 0m0.058s 00:04:30.643 01:30:14 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.643 01:30:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:30.643 ************************************ 00:04:30.643 END TEST env_mem_callbacks 00:04:30.643 ************************************ 00:04:30.643 00:04:30.643 real 0m6.199s 00:04:30.643 user 0m4.705s 00:04:30.643 sys 0m1.090s 00:04:30.643 01:30:14 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.643 01:30:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.643 ************************************ 00:04:30.643 END TEST env 00:04:30.643 
************************************ 00:04:30.643 01:30:14 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.643 01:30:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.643 01:30:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.643 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.643 ************************************ 00:04:30.643 START TEST rpc 00:04:30.643 ************************************ 00:04:30.643 01:30:14 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.905 * Looking for test storage... 00:04:30.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.905 01:30:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.905 01:30:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.905 01:30:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.905 01:30:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.905 01:30:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.905 01:30:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.905 01:30:14 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.905 01:30:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.905 01:30:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.905 01:30:14 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.905 01:30:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.905 01:30:14 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.905 01:30:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.905 01:30:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.905 01:30:14 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.905 --rc genhtml_branch_coverage=1 00:04:30.905 --rc genhtml_function_coverage=1 00:04:30.905 --rc genhtml_legend=1 00:04:30.905 --rc geninfo_all_blocks=1 00:04:30.905 --rc geninfo_unexecuted_blocks=1 00:04:30.905 00:04:30.905 ' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.905 --rc genhtml_branch_coverage=1 00:04:30.905 --rc genhtml_function_coverage=1 00:04:30.905 --rc genhtml_legend=1 00:04:30.905 --rc geninfo_all_blocks=1 00:04:30.905 --rc geninfo_unexecuted_blocks=1 00:04:30.905 00:04:30.905 ' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.905 --rc genhtml_branch_coverage=1 00:04:30.905 --rc genhtml_function_coverage=1 00:04:30.905 --rc genhtml_legend=1 00:04:30.905 --rc geninfo_all_blocks=1 00:04:30.905 --rc geninfo_unexecuted_blocks=1 00:04:30.905 00:04:30.905 ' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.905 --rc genhtml_branch_coverage=1 00:04:30.905 --rc genhtml_function_coverage=1 00:04:30.905 --rc genhtml_legend=1 00:04:30.905 --rc geninfo_all_blocks=1 00:04:30.905 --rc geninfo_unexecuted_blocks=1 00:04:30.905 00:04:30.905 ' 00:04:30.905 01:30:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57100 00:04:30.905 01:30:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.905 01:30:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57100 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 57100 ']' 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
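waitforlisten blocks here until the newly started spdk_tgt (pid 57100; its '-e bdev' launch line appears just below) answers on /var/tmp/spdk.sock. A stand-alone equivalent of that handshake, assuming the default socket path (the polling loop is a simplification of what waitforlisten does):

  ./build/bin/spdk_tgt -e bdev &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done
  echo "spdk_tgt ($tgt_pid) is up on /var/tmp/spdk.sock"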
00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.905 01:30:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.905 01:30:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:30.905 [2024-11-21 01:30:14.782745] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:30.905 [2024-11-21 01:30:14.782876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57100 ] 00:04:31.166 [2024-11-21 01:30:14.940678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.166 [2024-11-21 01:30:15.053223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:31.167 [2024-11-21 01:30:15.053295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57100' to capture a snapshot of events at runtime. 00:04:31.167 [2024-11-21 01:30:15.053307] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:31.167 [2024-11-21 01:30:15.053318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:31.167 [2024-11-21 01:30:15.053326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57100 for offline analysis/debug. 00:04:31.167 [2024-11-21 01:30:15.054263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.137 01:30:15 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.137 01:30:15 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:32.137 01:30:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.137 01:30:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.137 01:30:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:32.137 01:30:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:32.137 01:30:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.137 01:30:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.137 01:30:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.137 ************************************ 00:04:32.137 START TEST rpc_integrity 00:04:32.137 ************************************ 00:04:32.137 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
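rpc_cmd forwards each call to scripts/rpc.py over the target's RPC socket, so the step above can be repeated by hand; judging by the malloc bdev reported further down (block_size 512, num_blocks 16384), the arguments 8 and 512 are the bdev size in MiB and its block size (a sketch; the returned name, Malloc0, is assigned by the target):

  ./scripts/rpc.py bdev_malloc_create 8 512     # prints the new bdev name, e.g. Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length   # rpc_integrity expects the bdev count to grow by one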
00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.138 { 00:04:32.138 "name": "Malloc0", 00:04:32.138 "aliases": [ 00:04:32.138 "07bf8460-968f-4754-aa15-19487891dc03" 00:04:32.138 ], 00:04:32.138 "product_name": "Malloc disk", 00:04:32.138 "block_size": 512, 00:04:32.138 "num_blocks": 16384, 00:04:32.138 "uuid": "07bf8460-968f-4754-aa15-19487891dc03", 00:04:32.138 "assigned_rate_limits": { 00:04:32.138 "rw_ios_per_sec": 0, 00:04:32.138 "rw_mbytes_per_sec": 0, 00:04:32.138 "r_mbytes_per_sec": 0, 00:04:32.138 "w_mbytes_per_sec": 0 00:04:32.138 }, 00:04:32.138 "claimed": false, 00:04:32.138 "zoned": false, 00:04:32.138 "supported_io_types": { 00:04:32.138 "read": true, 00:04:32.138 "write": true, 00:04:32.138 "unmap": true, 00:04:32.138 "flush": true, 00:04:32.138 "reset": true, 00:04:32.138 "nvme_admin": false, 00:04:32.138 "nvme_io": false, 00:04:32.138 "nvme_io_md": false, 00:04:32.138 "write_zeroes": true, 00:04:32.138 "zcopy": true, 00:04:32.138 "get_zone_info": false, 00:04:32.138 "zone_management": false, 00:04:32.138 "zone_append": false, 00:04:32.138 "compare": false, 00:04:32.138 "compare_and_write": false, 00:04:32.138 "abort": true, 00:04:32.138 "seek_hole": false, 00:04:32.138 "seek_data": false, 00:04:32.138 "copy": true, 00:04:32.138 "nvme_iov_md": false 00:04:32.138 }, 00:04:32.138 "memory_domains": [ 00:04:32.138 { 00:04:32.138 "dma_device_id": "system", 00:04:32.138 "dma_device_type": 1 00:04:32.138 }, 00:04:32.138 { 00:04:32.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.138 "dma_device_type": 2 00:04:32.138 } 00:04:32.138 ], 00:04:32.138 "driver_specific": {} 00:04:32.138 } 00:04:32.138 ]' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 [2024-11-21 01:30:15.890295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:32.138 [2024-11-21 01:30:15.890397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.138 [2024-11-21 01:30:15.890433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:32.138 [2024-11-21 01:30:15.890448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.138 [2024-11-21 01:30:15.893322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.138 [2024-11-21 01:30:15.893387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.138 
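The NOTICE lines above come from the passthru vbdev module claiming Malloc0 and exposing it as Passthru0; the manual equivalent, including the teardown the test performs later, looks like this (a sketch using the names from this run):

  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # stack Passthru0 on top of Malloc0
  ./scripts/rpc.py bdev_passthru_delete Passthru0                 # release the claim on Malloc0
  ./scripts/rpc.py bdev_malloc_delete Malloc0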
Passthru0 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.138 { 00:04:32.138 "name": "Malloc0", 00:04:32.138 "aliases": [ 00:04:32.138 "07bf8460-968f-4754-aa15-19487891dc03" 00:04:32.138 ], 00:04:32.138 "product_name": "Malloc disk", 00:04:32.138 "block_size": 512, 00:04:32.138 "num_blocks": 16384, 00:04:32.138 "uuid": "07bf8460-968f-4754-aa15-19487891dc03", 00:04:32.138 "assigned_rate_limits": { 00:04:32.138 "rw_ios_per_sec": 0, 00:04:32.138 "rw_mbytes_per_sec": 0, 00:04:32.138 "r_mbytes_per_sec": 0, 00:04:32.138 "w_mbytes_per_sec": 0 00:04:32.138 }, 00:04:32.138 "claimed": true, 00:04:32.138 "claim_type": "exclusive_write", 00:04:32.138 "zoned": false, 00:04:32.138 "supported_io_types": { 00:04:32.138 "read": true, 00:04:32.138 "write": true, 00:04:32.138 "unmap": true, 00:04:32.138 "flush": true, 00:04:32.138 "reset": true, 00:04:32.138 "nvme_admin": false, 00:04:32.138 "nvme_io": false, 00:04:32.138 "nvme_io_md": false, 00:04:32.138 "write_zeroes": true, 00:04:32.138 "zcopy": true, 00:04:32.138 "get_zone_info": false, 00:04:32.138 "zone_management": false, 00:04:32.138 "zone_append": false, 00:04:32.138 "compare": false, 00:04:32.138 "compare_and_write": false, 00:04:32.138 "abort": true, 00:04:32.138 "seek_hole": false, 00:04:32.138 "seek_data": false, 00:04:32.138 "copy": true, 00:04:32.138 "nvme_iov_md": false 00:04:32.138 }, 00:04:32.138 "memory_domains": [ 00:04:32.138 { 00:04:32.138 "dma_device_id": "system", 00:04:32.138 "dma_device_type": 1 00:04:32.138 }, 00:04:32.138 { 00:04:32.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.138 "dma_device_type": 2 00:04:32.138 } 00:04:32.138 ], 00:04:32.138 "driver_specific": {} 00:04:32.138 }, 00:04:32.138 { 00:04:32.138 "name": "Passthru0", 00:04:32.138 "aliases": [ 00:04:32.138 "68aeccb4-0f48-5465-a9a0-ffc691f58d1a" 00:04:32.138 ], 00:04:32.138 "product_name": "passthru", 00:04:32.138 "block_size": 512, 00:04:32.138 "num_blocks": 16384, 00:04:32.138 "uuid": "68aeccb4-0f48-5465-a9a0-ffc691f58d1a", 00:04:32.138 "assigned_rate_limits": { 00:04:32.138 "rw_ios_per_sec": 0, 00:04:32.138 "rw_mbytes_per_sec": 0, 00:04:32.138 "r_mbytes_per_sec": 0, 00:04:32.138 "w_mbytes_per_sec": 0 00:04:32.138 }, 00:04:32.138 "claimed": false, 00:04:32.138 "zoned": false, 00:04:32.138 "supported_io_types": { 00:04:32.138 "read": true, 00:04:32.138 "write": true, 00:04:32.138 "unmap": true, 00:04:32.138 "flush": true, 00:04:32.138 "reset": true, 00:04:32.138 "nvme_admin": false, 00:04:32.138 "nvme_io": false, 00:04:32.138 "nvme_io_md": false, 00:04:32.138 "write_zeroes": true, 00:04:32.138 "zcopy": true, 00:04:32.138 "get_zone_info": false, 00:04:32.138 "zone_management": false, 00:04:32.138 "zone_append": false, 00:04:32.138 "compare": false, 00:04:32.138 "compare_and_write": false, 00:04:32.138 "abort": true, 00:04:32.138 "seek_hole": false, 00:04:32.138 "seek_data": false, 00:04:32.138 "copy": true, 00:04:32.138 "nvme_iov_md": false 00:04:32.138 }, 00:04:32.138 "memory_domains": [ 00:04:32.138 { 00:04:32.138 "dma_device_id": "system", 00:04:32.138 "dma_device_type": 1 00:04:32.138 }, 
00:04:32.138 { 00:04:32.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.138 "dma_device_type": 2 00:04:32.138 } 00:04:32.138 ], 00:04:32.138 "driver_specific": { 00:04:32.138 "passthru": { 00:04:32.138 "name": "Passthru0", 00:04:32.138 "base_bdev_name": "Malloc0" 00:04:32.138 } 00:04:32.138 } 00:04:32.138 } 00:04:32.138 ]' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.138 01:30:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:16 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.138 01:30:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.138 01:30:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.138 01:30:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.138 00:04:32.138 real 0m0.261s 00:04:32.138 user 0m0.137s 00:04:32.138 sys 0m0.031s 00:04:32.138 ************************************ 00:04:32.138 END TEST rpc_integrity 00:04:32.138 ************************************ 00:04:32.138 01:30:16 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.138 01:30:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.138 01:30:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.139 01:30:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.139 01:30:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.139 01:30:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 ************************************ 00:04:32.401 START TEST rpc_plugins 00:04:32.401 ************************************ 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.401 01:30:16 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.401 { 00:04:32.401 "name": "Malloc1", 00:04:32.401 "aliases": [ 00:04:32.401 "77ce3843-f9a4-4111-af1f-5e42f9a84f6b" 00:04:32.401 ], 00:04:32.401 "product_name": "Malloc disk", 00:04:32.401 "block_size": 4096, 00:04:32.401 "num_blocks": 256, 00:04:32.401 "uuid": "77ce3843-f9a4-4111-af1f-5e42f9a84f6b", 00:04:32.401 "assigned_rate_limits": { 00:04:32.401 "rw_ios_per_sec": 0, 00:04:32.401 "rw_mbytes_per_sec": 0, 00:04:32.401 "r_mbytes_per_sec": 0, 00:04:32.401 "w_mbytes_per_sec": 0 00:04:32.401 }, 00:04:32.401 "claimed": false, 00:04:32.401 "zoned": false, 00:04:32.401 "supported_io_types": { 00:04:32.401 "read": true, 00:04:32.401 "write": true, 00:04:32.401 "unmap": true, 00:04:32.401 "flush": true, 00:04:32.401 "reset": true, 00:04:32.401 "nvme_admin": false, 00:04:32.401 "nvme_io": false, 00:04:32.401 "nvme_io_md": false, 00:04:32.401 "write_zeroes": true, 00:04:32.401 "zcopy": true, 00:04:32.401 "get_zone_info": false, 00:04:32.401 "zone_management": false, 00:04:32.401 "zone_append": false, 00:04:32.401 "compare": false, 00:04:32.401 "compare_and_write": false, 00:04:32.401 "abort": true, 00:04:32.401 "seek_hole": false, 00:04:32.401 "seek_data": false, 00:04:32.401 "copy": true, 00:04:32.401 "nvme_iov_md": false 00:04:32.401 }, 00:04:32.401 "memory_domains": [ 00:04:32.401 { 00:04:32.401 "dma_device_id": "system", 00:04:32.401 "dma_device_type": 1 00:04:32.401 }, 00:04:32.401 { 00:04:32.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.401 "dma_device_type": 2 00:04:32.401 } 00:04:32.401 ], 00:04:32.401 "driver_specific": {} 00:04:32.401 } 00:04:32.401 ]' 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.401 01:30:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.401 00:04:32.401 real 0m0.123s 00:04:32.401 user 0m0.068s 00:04:32.401 sys 0m0.016s 00:04:32.401 ************************************ 00:04:32.401 END TEST rpc_plugins 00:04:32.401 ************************************ 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.401 01:30:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.401 01:30:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.401 01:30:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 ************************************ 00:04:32.401 START TEST rpc_trace_cmd_test 
00:04:32.401 ************************************ 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.401 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.401 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57100", 00:04:32.401 "tpoint_group_mask": "0x8", 00:04:32.401 "iscsi_conn": { 00:04:32.401 "mask": "0x2", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "scsi": { 00:04:32.401 "mask": "0x4", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "bdev": { 00:04:32.401 "mask": "0x8", 00:04:32.401 "tpoint_mask": "0xffffffffffffffff" 00:04:32.401 }, 00:04:32.401 "nvmf_rdma": { 00:04:32.401 "mask": "0x10", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "nvmf_tcp": { 00:04:32.401 "mask": "0x20", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "ftl": { 00:04:32.401 "mask": "0x40", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "blobfs": { 00:04:32.401 "mask": "0x80", 00:04:32.401 "tpoint_mask": "0x0" 00:04:32.401 }, 00:04:32.401 "dsa": { 00:04:32.402 "mask": "0x200", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "thread": { 00:04:32.402 "mask": "0x400", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "nvme_pcie": { 00:04:32.402 "mask": "0x800", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "iaa": { 00:04:32.402 "mask": "0x1000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "nvme_tcp": { 00:04:32.402 "mask": "0x2000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "bdev_nvme": { 00:04:32.402 "mask": "0x4000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "sock": { 00:04:32.402 "mask": "0x8000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "blob": { 00:04:32.402 "mask": "0x10000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "bdev_raid": { 00:04:32.402 "mask": "0x20000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 }, 00:04:32.402 "scheduler": { 00:04:32.402 "mask": "0x40000", 00:04:32.402 "tpoint_mask": "0x0" 00:04:32.402 } 00:04:32.402 }' 00:04:32.402 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:32.402 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:32.402 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.661 00:04:32.661 real 0m0.180s 00:04:32.661 
user 0m0.151s 00:04:32.661 sys 0m0.019s 00:04:32.661 ************************************ 00:04:32.661 END TEST rpc_trace_cmd_test 00:04:32.661 ************************************ 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.661 01:30:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.661 01:30:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.661 01:30:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.661 01:30:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.661 01:30:16 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.661 01:30:16 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.661 01:30:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.661 ************************************ 00:04:32.661 START TEST rpc_daemon_integrity 00:04:32.662 ************************************ 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.662 { 00:04:32.662 "name": "Malloc2", 00:04:32.662 "aliases": [ 00:04:32.662 "cf0773b9-5be5-466c-93e2-39bf6ca4f1cc" 00:04:32.662 ], 00:04:32.662 "product_name": "Malloc disk", 00:04:32.662 "block_size": 512, 00:04:32.662 "num_blocks": 16384, 00:04:32.662 "uuid": "cf0773b9-5be5-466c-93e2-39bf6ca4f1cc", 00:04:32.662 "assigned_rate_limits": { 00:04:32.662 "rw_ios_per_sec": 0, 00:04:32.662 "rw_mbytes_per_sec": 0, 00:04:32.662 "r_mbytes_per_sec": 0, 00:04:32.662 "w_mbytes_per_sec": 0 00:04:32.662 }, 00:04:32.662 "claimed": false, 00:04:32.662 "zoned": false, 00:04:32.662 "supported_io_types": { 00:04:32.662 "read": true, 00:04:32.662 "write": true, 00:04:32.662 "unmap": true, 00:04:32.662 "flush": true, 00:04:32.662 "reset": true, 00:04:32.662 "nvme_admin": false, 00:04:32.662 "nvme_io": false, 00:04:32.662 "nvme_io_md": false, 00:04:32.662 "write_zeroes": true, 00:04:32.662 "zcopy": true, 00:04:32.662 "get_zone_info": 
false, 00:04:32.662 "zone_management": false, 00:04:32.662 "zone_append": false, 00:04:32.662 "compare": false, 00:04:32.662 "compare_and_write": false, 00:04:32.662 "abort": true, 00:04:32.662 "seek_hole": false, 00:04:32.662 "seek_data": false, 00:04:32.662 "copy": true, 00:04:32.662 "nvme_iov_md": false 00:04:32.662 }, 00:04:32.662 "memory_domains": [ 00:04:32.662 { 00:04:32.662 "dma_device_id": "system", 00:04:32.662 "dma_device_type": 1 00:04:32.662 }, 00:04:32.662 { 00:04:32.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.662 "dma_device_type": 2 00:04:32.662 } 00:04:32.662 ], 00:04:32.662 "driver_specific": {} 00:04:32.662 } 00:04:32.662 ]' 00:04:32.662 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 [2024-11-21 01:30:16.626388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.921 [2024-11-21 01:30:16.626471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.921 [2024-11-21 01:30:16.626497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:32.921 [2024-11-21 01:30:16.626509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.921 [2024-11-21 01:30:16.628810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.921 [2024-11-21 01:30:16.628857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.921 Passthru0 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.921 { 00:04:32.921 "name": "Malloc2", 00:04:32.921 "aliases": [ 00:04:32.921 "cf0773b9-5be5-466c-93e2-39bf6ca4f1cc" 00:04:32.921 ], 00:04:32.921 "product_name": "Malloc disk", 00:04:32.921 "block_size": 512, 00:04:32.921 "num_blocks": 16384, 00:04:32.921 "uuid": "cf0773b9-5be5-466c-93e2-39bf6ca4f1cc", 00:04:32.921 "assigned_rate_limits": { 00:04:32.921 "rw_ios_per_sec": 0, 00:04:32.921 "rw_mbytes_per_sec": 0, 00:04:32.921 "r_mbytes_per_sec": 0, 00:04:32.921 "w_mbytes_per_sec": 0 00:04:32.921 }, 00:04:32.921 "claimed": true, 00:04:32.921 "claim_type": "exclusive_write", 00:04:32.921 "zoned": false, 00:04:32.921 "supported_io_types": { 00:04:32.921 "read": true, 00:04:32.921 "write": true, 00:04:32.921 "unmap": true, 00:04:32.921 "flush": true, 00:04:32.921 "reset": true, 00:04:32.921 "nvme_admin": false, 00:04:32.921 "nvme_io": false, 00:04:32.921 "nvme_io_md": false, 00:04:32.921 "write_zeroes": true, 00:04:32.921 "zcopy": true, 00:04:32.921 "get_zone_info": false, 00:04:32.921 "zone_management": false, 00:04:32.921 "zone_append": false, 00:04:32.921 "compare": false, 
00:04:32.921 "compare_and_write": false, 00:04:32.921 "abort": true, 00:04:32.921 "seek_hole": false, 00:04:32.921 "seek_data": false, 00:04:32.921 "copy": true, 00:04:32.921 "nvme_iov_md": false 00:04:32.921 }, 00:04:32.921 "memory_domains": [ 00:04:32.921 { 00:04:32.921 "dma_device_id": "system", 00:04:32.921 "dma_device_type": 1 00:04:32.921 }, 00:04:32.921 { 00:04:32.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.921 "dma_device_type": 2 00:04:32.921 } 00:04:32.921 ], 00:04:32.921 "driver_specific": {} 00:04:32.921 }, 00:04:32.921 { 00:04:32.921 "name": "Passthru0", 00:04:32.921 "aliases": [ 00:04:32.921 "77b3ebff-689c-5357-ab07-54103ccdb5e4" 00:04:32.921 ], 00:04:32.921 "product_name": "passthru", 00:04:32.921 "block_size": 512, 00:04:32.921 "num_blocks": 16384, 00:04:32.921 "uuid": "77b3ebff-689c-5357-ab07-54103ccdb5e4", 00:04:32.921 "assigned_rate_limits": { 00:04:32.921 "rw_ios_per_sec": 0, 00:04:32.921 "rw_mbytes_per_sec": 0, 00:04:32.921 "r_mbytes_per_sec": 0, 00:04:32.921 "w_mbytes_per_sec": 0 00:04:32.921 }, 00:04:32.921 "claimed": false, 00:04:32.921 "zoned": false, 00:04:32.921 "supported_io_types": { 00:04:32.921 "read": true, 00:04:32.921 "write": true, 00:04:32.921 "unmap": true, 00:04:32.921 "flush": true, 00:04:32.921 "reset": true, 00:04:32.921 "nvme_admin": false, 00:04:32.921 "nvme_io": false, 00:04:32.921 "nvme_io_md": false, 00:04:32.921 "write_zeroes": true, 00:04:32.921 "zcopy": true, 00:04:32.921 "get_zone_info": false, 00:04:32.921 "zone_management": false, 00:04:32.921 "zone_append": false, 00:04:32.921 "compare": false, 00:04:32.921 "compare_and_write": false, 00:04:32.921 "abort": true, 00:04:32.921 "seek_hole": false, 00:04:32.921 "seek_data": false, 00:04:32.921 "copy": true, 00:04:32.921 "nvme_iov_md": false 00:04:32.921 }, 00:04:32.921 "memory_domains": [ 00:04:32.921 { 00:04:32.921 "dma_device_id": "system", 00:04:32.921 "dma_device_type": 1 00:04:32.921 }, 00:04:32.921 { 00:04:32.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.921 "dma_device_type": 2 00:04:32.921 } 00:04:32.921 ], 00:04:32.921 "driver_specific": { 00:04:32.921 "passthru": { 00:04:32.921 "name": "Passthru0", 00:04:32.921 "base_bdev_name": "Malloc2" 00:04:32.921 } 00:04:32.921 } 00:04:32.921 } 00:04:32.921 ]' 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.921 00:04:32.921 real 0m0.235s 00:04:32.921 user 0m0.115s 00:04:32.921 sys 0m0.044s 00:04:32.921 ************************************ 00:04:32.921 END TEST rpc_daemon_integrity 00:04:32.921 ************************************ 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.921 01:30:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.921 01:30:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.921 01:30:16 rpc -- rpc/rpc.sh@84 -- # killprocess 57100 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@954 -- # '[' -z 57100 ']' 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@958 -- # kill -0 57100 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@959 -- # uname 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57100 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.922 killing process with pid 57100 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57100' 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@973 -- # kill 57100 00:04:32.922 01:30:16 rpc -- common/autotest_common.sh@978 -- # wait 57100 00:04:34.300 00:04:34.300 real 0m3.500s 00:04:34.300 user 0m3.799s 00:04:34.300 sys 0m0.711s 00:04:34.300 ************************************ 00:04:34.300 END TEST rpc 00:04:34.300 ************************************ 00:04:34.300 01:30:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.300 01:30:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 01:30:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.300 01:30:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.300 01:30:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.300 01:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 ************************************ 00:04:34.300 START TEST skip_rpc 00:04:34.300 ************************************ 00:04:34.300 01:30:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:34.300 * Looking for test storage... 
00:04:34.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.300 01:30:18 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.300 01:30:18 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.300 01:30:18 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.559 01:30:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.559 --rc genhtml_branch_coverage=1 00:04:34.559 --rc genhtml_function_coverage=1 00:04:34.559 --rc genhtml_legend=1 00:04:34.559 --rc geninfo_all_blocks=1 00:04:34.559 --rc geninfo_unexecuted_blocks=1 00:04:34.559 00:04:34.559 ' 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.559 --rc genhtml_branch_coverage=1 00:04:34.559 --rc genhtml_function_coverage=1 00:04:34.559 --rc genhtml_legend=1 00:04:34.559 --rc geninfo_all_blocks=1 00:04:34.559 --rc geninfo_unexecuted_blocks=1 00:04:34.559 00:04:34.559 ' 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.559 --rc genhtml_branch_coverage=1 00:04:34.559 --rc genhtml_function_coverage=1 00:04:34.559 --rc genhtml_legend=1 00:04:34.559 --rc geninfo_all_blocks=1 00:04:34.559 --rc geninfo_unexecuted_blocks=1 00:04:34.559 00:04:34.559 ' 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.559 --rc genhtml_branch_coverage=1 00:04:34.559 --rc genhtml_function_coverage=1 00:04:34.559 --rc genhtml_legend=1 00:04:34.559 --rc geninfo_all_blocks=1 00:04:34.559 --rc geninfo_unexecuted_blocks=1 00:04:34.559 00:04:34.559 ' 00:04:34.559 01:30:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.559 01:30:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:34.559 01:30:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:34.559 01:30:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.560 01:30:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.560 01:30:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.560 ************************************ 00:04:34.560 START TEST skip_rpc 00:04:34.560 ************************************ 00:04:34.560 01:30:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:34.560 01:30:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57307 00:04:34.560 01:30:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.560 01:30:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:34.560 01:30:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:34.560 [2024-11-21 01:30:18.371395] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
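Because the target above was started with --no-rpc-server, the rpc_cmd spdk_get_version attempt that follows is wrapped in NOT and is expected to fail; the test only passes when the RPC call errors out while the target keeps running. Outside the harness the same check could be approximated with the rpc.py client against the default /var/tmp/spdk.sock socket (a sketch, not the harness's rpc_cmd helper):

    # must fail: no RPC server is listening on the default socket
    if scripts/rpc.py spdk_get_version; then
        echo "spdk_get_version unexpectedly succeeded" >&2
        exit 1
    fi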
00:04:34.560 [2024-11-21 01:30:18.371552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57307 ] 00:04:34.818 [2024-11-21 01:30:18.534068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.818 [2024-11-21 01:30:18.640146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:40.081 01:30:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57307 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57307 ']' 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57307 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57307 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.082 killing process with pid 57307 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57307' 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57307 00:04:40.082 01:30:23 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57307 00:04:40.649 00:04:40.649 real 0m6.270s 00:04:40.649 user 0m5.828s 00:04:40.649 sys 0m0.339s 00:04:40.649 01:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.649 01:30:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.649 ************************************ 00:04:40.649 END TEST skip_rpc 00:04:40.649 
************************************ 00:04:40.649 01:30:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:40.649 01:30:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.649 01:30:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.649 01:30:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.649 ************************************ 00:04:40.649 START TEST skip_rpc_with_json 00:04:40.649 ************************************ 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57406 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57406 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57406 ']' 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.650 01:30:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.908 [2024-11-21 01:30:24.664634] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
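skip_rpc_with_json, which is starting up here, first calls nvmf_get_transports for a transport that does not exist yet (the JSON-RPC error below is expected), then creates the TCP transport and dumps the whole configuration with save_config; that dump is written to test/rpc/config.json and is what the second spdk_tgt run further down reloads via --json. The same save-and-reload round trip, reduced to plain commands (socket and output path are illustrative):

    scripts/rpc.py nvmf_create_transport -t tcp    # make the state worth saving
    scripts/rpc.py save_config > /tmp/config.json  # full subsystem config as JSON
    build/bin/spdk_tgt --json /tmp/config.json     # restart straight from that file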
00:04:40.908 [2024-11-21 01:30:24.664755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57406 ] 00:04:40.908 [2024-11-21 01:30:24.818966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.167 [2024-11-21 01:30:24.913172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 [2024-11-21 01:30:25.489993] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:41.758 request: 00:04:41.758 { 00:04:41.758 "trtype": "tcp", 00:04:41.758 "method": "nvmf_get_transports", 00:04:41.758 "req_id": 1 00:04:41.758 } 00:04:41.758 Got JSON-RPC error response 00:04:41.758 response: 00:04:41.758 { 00:04:41.758 "code": -19, 00:04:41.758 "message": "No such device" 00:04:41.758 } 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 [2024-11-21 01:30:25.498105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.758 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:41.758 { 00:04:41.758 "subsystems": [ 00:04:41.758 { 00:04:41.758 "subsystem": "fsdev", 00:04:41.758 "config": [ 00:04:41.758 { 00:04:41.758 "method": "fsdev_set_opts", 00:04:41.758 "params": { 00:04:41.758 "fsdev_io_pool_size": 65535, 00:04:41.758 "fsdev_io_cache_size": 256 00:04:41.758 } 00:04:41.758 } 00:04:41.758 ] 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "subsystem": "keyring", 00:04:41.758 "config": [] 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "subsystem": "iobuf", 00:04:41.758 "config": [ 00:04:41.758 { 00:04:41.758 "method": "iobuf_set_options", 00:04:41.758 "params": { 00:04:41.758 "small_pool_count": 8192, 00:04:41.758 "large_pool_count": 1024, 00:04:41.758 "small_bufsize": 8192, 00:04:41.758 "large_bufsize": 135168, 00:04:41.758 "enable_numa": false 00:04:41.758 } 00:04:41.758 } 00:04:41.758 ] 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "subsystem": "sock", 00:04:41.758 "config": [ 00:04:41.758 { 
00:04:41.758 "method": "sock_set_default_impl", 00:04:41.758 "params": { 00:04:41.758 "impl_name": "posix" 00:04:41.758 } 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "method": "sock_impl_set_options", 00:04:41.758 "params": { 00:04:41.758 "impl_name": "ssl", 00:04:41.758 "recv_buf_size": 4096, 00:04:41.758 "send_buf_size": 4096, 00:04:41.758 "enable_recv_pipe": true, 00:04:41.758 "enable_quickack": false, 00:04:41.758 "enable_placement_id": 0, 00:04:41.758 "enable_zerocopy_send_server": true, 00:04:41.758 "enable_zerocopy_send_client": false, 00:04:41.758 "zerocopy_threshold": 0, 00:04:41.758 "tls_version": 0, 00:04:41.758 "enable_ktls": false 00:04:41.758 } 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "method": "sock_impl_set_options", 00:04:41.758 "params": { 00:04:41.758 "impl_name": "posix", 00:04:41.758 "recv_buf_size": 2097152, 00:04:41.758 "send_buf_size": 2097152, 00:04:41.758 "enable_recv_pipe": true, 00:04:41.758 "enable_quickack": false, 00:04:41.758 "enable_placement_id": 0, 00:04:41.758 "enable_zerocopy_send_server": true, 00:04:41.758 "enable_zerocopy_send_client": false, 00:04:41.758 "zerocopy_threshold": 0, 00:04:41.758 "tls_version": 0, 00:04:41.758 "enable_ktls": false 00:04:41.758 } 00:04:41.758 } 00:04:41.758 ] 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "subsystem": "vmd", 00:04:41.758 "config": [] 00:04:41.758 }, 00:04:41.758 { 00:04:41.758 "subsystem": "accel", 00:04:41.758 "config": [ 00:04:41.758 { 00:04:41.758 "method": "accel_set_options", 00:04:41.758 "params": { 00:04:41.758 "small_cache_size": 128, 00:04:41.758 "large_cache_size": 16, 00:04:41.758 "task_count": 2048, 00:04:41.758 "sequence_count": 2048, 00:04:41.759 "buf_count": 2048 00:04:41.759 } 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "bdev", 00:04:41.759 "config": [ 00:04:41.759 { 00:04:41.759 "method": "bdev_set_options", 00:04:41.759 "params": { 00:04:41.759 "bdev_io_pool_size": 65535, 00:04:41.759 "bdev_io_cache_size": 256, 00:04:41.759 "bdev_auto_examine": true, 00:04:41.759 "iobuf_small_cache_size": 128, 00:04:41.759 "iobuf_large_cache_size": 16 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "bdev_raid_set_options", 00:04:41.759 "params": { 00:04:41.759 "process_window_size_kb": 1024, 00:04:41.759 "process_max_bandwidth_mb_sec": 0 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "bdev_iscsi_set_options", 00:04:41.759 "params": { 00:04:41.759 "timeout_sec": 30 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "bdev_nvme_set_options", 00:04:41.759 "params": { 00:04:41.759 "action_on_timeout": "none", 00:04:41.759 "timeout_us": 0, 00:04:41.759 "timeout_admin_us": 0, 00:04:41.759 "keep_alive_timeout_ms": 10000, 00:04:41.759 "arbitration_burst": 0, 00:04:41.759 "low_priority_weight": 0, 00:04:41.759 "medium_priority_weight": 0, 00:04:41.759 "high_priority_weight": 0, 00:04:41.759 "nvme_adminq_poll_period_us": 10000, 00:04:41.759 "nvme_ioq_poll_period_us": 0, 00:04:41.759 "io_queue_requests": 0, 00:04:41.759 "delay_cmd_submit": true, 00:04:41.759 "transport_retry_count": 4, 00:04:41.759 "bdev_retry_count": 3, 00:04:41.759 "transport_ack_timeout": 0, 00:04:41.759 "ctrlr_loss_timeout_sec": 0, 00:04:41.759 "reconnect_delay_sec": 0, 00:04:41.759 "fast_io_fail_timeout_sec": 0, 00:04:41.759 "disable_auto_failback": false, 00:04:41.759 "generate_uuids": false, 00:04:41.759 "transport_tos": 0, 00:04:41.759 "nvme_error_stat": false, 00:04:41.759 "rdma_srq_size": 0, 00:04:41.759 "io_path_stat": false, 
00:04:41.759 "allow_accel_sequence": false, 00:04:41.759 "rdma_max_cq_size": 0, 00:04:41.759 "rdma_cm_event_timeout_ms": 0, 00:04:41.759 "dhchap_digests": [ 00:04:41.759 "sha256", 00:04:41.759 "sha384", 00:04:41.759 "sha512" 00:04:41.759 ], 00:04:41.759 "dhchap_dhgroups": [ 00:04:41.759 "null", 00:04:41.759 "ffdhe2048", 00:04:41.759 "ffdhe3072", 00:04:41.759 "ffdhe4096", 00:04:41.759 "ffdhe6144", 00:04:41.759 "ffdhe8192" 00:04:41.759 ] 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "bdev_nvme_set_hotplug", 00:04:41.759 "params": { 00:04:41.759 "period_us": 100000, 00:04:41.759 "enable": false 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "bdev_wait_for_examine" 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "scsi", 00:04:41.759 "config": null 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "scheduler", 00:04:41.759 "config": [ 00:04:41.759 { 00:04:41.759 "method": "framework_set_scheduler", 00:04:41.759 "params": { 00:04:41.759 "name": "static" 00:04:41.759 } 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "vhost_scsi", 00:04:41.759 "config": [] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "vhost_blk", 00:04:41.759 "config": [] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "ublk", 00:04:41.759 "config": [] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "nbd", 00:04:41.759 "config": [] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "nvmf", 00:04:41.759 "config": [ 00:04:41.759 { 00:04:41.759 "method": "nvmf_set_config", 00:04:41.759 "params": { 00:04:41.759 "discovery_filter": "match_any", 00:04:41.759 "admin_cmd_passthru": { 00:04:41.759 "identify_ctrlr": false 00:04:41.759 }, 00:04:41.759 "dhchap_digests": [ 00:04:41.759 "sha256", 00:04:41.759 "sha384", 00:04:41.759 "sha512" 00:04:41.759 ], 00:04:41.759 "dhchap_dhgroups": [ 00:04:41.759 "null", 00:04:41.759 "ffdhe2048", 00:04:41.759 "ffdhe3072", 00:04:41.759 "ffdhe4096", 00:04:41.759 "ffdhe6144", 00:04:41.759 "ffdhe8192" 00:04:41.759 ] 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "nvmf_set_max_subsystems", 00:04:41.759 "params": { 00:04:41.759 "max_subsystems": 1024 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "nvmf_set_crdt", 00:04:41.759 "params": { 00:04:41.759 "crdt1": 0, 00:04:41.759 "crdt2": 0, 00:04:41.759 "crdt3": 0 00:04:41.759 } 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "method": "nvmf_create_transport", 00:04:41.759 "params": { 00:04:41.759 "trtype": "TCP", 00:04:41.759 "max_queue_depth": 128, 00:04:41.759 "max_io_qpairs_per_ctrlr": 127, 00:04:41.759 "in_capsule_data_size": 4096, 00:04:41.759 "max_io_size": 131072, 00:04:41.759 "io_unit_size": 131072, 00:04:41.759 "max_aq_depth": 128, 00:04:41.759 "num_shared_buffers": 511, 00:04:41.759 "buf_cache_size": 4294967295, 00:04:41.759 "dif_insert_or_strip": false, 00:04:41.759 "zcopy": false, 00:04:41.759 "c2h_success": true, 00:04:41.759 "sock_priority": 0, 00:04:41.759 "abort_timeout_sec": 1, 00:04:41.759 "ack_timeout": 0, 00:04:41.759 "data_wr_pool_size": 0 00:04:41.759 } 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 }, 00:04:41.759 { 00:04:41.759 "subsystem": "iscsi", 00:04:41.759 "config": [ 00:04:41.759 { 00:04:41.759 "method": "iscsi_set_options", 00:04:41.759 "params": { 00:04:41.759 "node_base": "iqn.2016-06.io.spdk", 00:04:41.759 "max_sessions": 128, 00:04:41.759 "max_connections_per_session": 2, 00:04:41.759 "max_queue_depth": 64, 00:04:41.759 
"default_time2wait": 2, 00:04:41.759 "default_time2retain": 20, 00:04:41.759 "first_burst_length": 8192, 00:04:41.759 "immediate_data": true, 00:04:41.759 "allow_duplicated_isid": false, 00:04:41.759 "error_recovery_level": 0, 00:04:41.759 "nop_timeout": 60, 00:04:41.759 "nop_in_interval": 30, 00:04:41.759 "disable_chap": false, 00:04:41.759 "require_chap": false, 00:04:41.759 "mutual_chap": false, 00:04:41.759 "chap_group": 0, 00:04:41.759 "max_large_datain_per_connection": 64, 00:04:41.759 "max_r2t_per_connection": 4, 00:04:41.759 "pdu_pool_size": 36864, 00:04:41.759 "immediate_data_pool_size": 16384, 00:04:41.759 "data_out_pool_size": 2048 00:04:41.759 } 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 } 00:04:41.759 ] 00:04:41.759 } 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57406 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57406 ']' 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57406 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57406 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.759 killing process with pid 57406 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57406' 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57406 00:04:41.759 01:30:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57406 00:04:43.673 01:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57445 00:04:43.673 01:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.673 01:30:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57445 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57445 ']' 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57445 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57445 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.952 killing process with pid 57445 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57445' 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57445 00:04:48.952 01:30:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57445 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:49.519 00:04:49.519 real 0m8.798s 00:04:49.519 user 0m8.284s 00:04:49.519 sys 0m0.720s 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.519 ************************************ 00:04:49.519 END TEST skip_rpc_with_json 00:04:49.519 ************************************ 00:04:49.519 01:30:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.519 01:30:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.519 01:30:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.519 01:30:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.519 ************************************ 00:04:49.519 START TEST skip_rpc_with_delay 00:04:49.519 ************************************ 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:49.519 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.777 [2024-11-21 01:30:33.502080] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
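That *ERROR* is the whole point of skip_rpc_with_delay: the target is launched with both --no-rpc-server and --wait-for-rpc, a combination the app layer rejects, and the test asserts that the launch fails (the es=1 bookkeeping below). Reduced to its essence, with the flags exactly as in the log:

    # must fail: waiting for RPC makes no sense when the RPC server is disabled
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt started despite conflicting flags" >&2
        exit 1
    fi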
00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:49.777 00:04:49.777 real 0m0.122s 00:04:49.777 user 0m0.069s 00:04:49.777 sys 0m0.050s 00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.777 ************************************ 00:04:49.777 END TEST skip_rpc_with_delay 00:04:49.777 ************************************ 00:04:49.777 01:30:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 01:30:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.777 01:30:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.777 01:30:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.777 01:30:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.777 01:30:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.777 01:30:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 ************************************ 00:04:49.777 START TEST exit_on_failed_rpc_init 00:04:49.777 ************************************ 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57562 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57562 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57562 ']' 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.777 01:30:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.777 [2024-11-21 01:30:33.667572] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
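The waitforlisten call above blocks until the freshly launched target has created its RPC socket before the test goes on. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock path (the real helper also keeps checking that the PID is still alive and gives up after a retry budget):

    # poll until the RPC socket shows up
    while [[ ! -S /var/tmp/spdk.sock ]]; do
        sleep 0.1
    done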
00:04:49.777 [2024-11-21 01:30:33.667705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57562 ] 00:04:50.035 [2024-11-21 01:30:33.821689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.035 [2024-11-21 01:30:33.902493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:50.601 01:30:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.858 [2024-11-21 01:30:34.571563] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:50.858 [2024-11-21 01:30:34.571698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57580 ] 00:04:50.858 [2024-11-21 01:30:34.732188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.116 [2024-11-21 01:30:34.828179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.116 [2024-11-21 01:30:34.828275] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
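This *ERROR* is deliberate: the first spdk_tgt (pid 57562) already owns /var/tmp/spdk.sock, so the second instance the test launches cannot start its RPC service and exits non-zero, which is exactly what exit_on_failed_rpc_init asserts (note the es=234 -> 106 -> 1 mapping that follows). Each instance automatically gets a DPDK file prefix derived from its PID (visible in the EAL parameters above), so the RPC socket is the only contended resource; a hand-rolled version of the collision would look roughly like:

    build/bin/spdk_tgt -m 0x1 &               # first instance claims /var/tmp/spdk.sock
    first=$!
    sleep 1                                   # crude stand-in for waitforlisten
    if build/bin/spdk_tgt -m 0x2; then        # second instance must fail to listen
        echo "second target unexpectedly started" >&2
    fi
    kill "$first"; wait "$first" || true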
00:04:51.116 [2024-11-21 01:30:34.828289] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:51.116 [2024-11-21 01:30:34.828301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57562 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57562 ']' 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57562 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57562 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.116 killing process with pid 57562 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57562' 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57562 00:04:51.116 01:30:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57562 00:04:52.519 00:04:52.519 real 0m2.578s 00:04:52.519 user 0m2.896s 00:04:52.519 sys 0m0.393s 00:04:52.519 ************************************ 00:04:52.519 END TEST exit_on_failed_rpc_init 00:04:52.520 ************************************ 00:04:52.520 01:30:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.520 01:30:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 01:30:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:52.520 00:04:52.520 real 0m18.086s 00:04:52.520 user 0m17.217s 00:04:52.520 sys 0m1.675s 00:04:52.520 01:30:36 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.520 01:30:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 ************************************ 00:04:52.520 END TEST skip_rpc 00:04:52.520 ************************************ 00:04:52.520 01:30:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:52.520 01:30:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.520 01:30:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.520 01:30:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 
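The rpc_client suite that starts below is far smaller than the shell-driven tests above: rpc_client.sh essentially just runs the prebuilt rpc_client_test binary, which exercises the JSON-RPC client library and whose successful run is reported as OK, so the transcript amounts to little more than that line plus the timing summary. Run by hand it is simply:

    /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test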
************************************ 00:04:52.520 START TEST rpc_client 00:04:52.520 ************************************ 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:52.520 * Looking for test storage... 00:04:52.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.520 01:30:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.520 --rc genhtml_branch_coverage=1 00:04:52.520 --rc genhtml_function_coverage=1 00:04:52.520 --rc genhtml_legend=1 00:04:52.520 --rc geninfo_all_blocks=1 00:04:52.520 --rc geninfo_unexecuted_blocks=1 00:04:52.520 00:04:52.520 ' 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.520 --rc genhtml_branch_coverage=1 00:04:52.520 --rc genhtml_function_coverage=1 00:04:52.520 --rc genhtml_legend=1 00:04:52.520 --rc geninfo_all_blocks=1 00:04:52.520 --rc geninfo_unexecuted_blocks=1 00:04:52.520 00:04:52.520 ' 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.520 --rc genhtml_branch_coverage=1 00:04:52.520 --rc genhtml_function_coverage=1 00:04:52.520 --rc genhtml_legend=1 00:04:52.520 --rc geninfo_all_blocks=1 00:04:52.520 --rc geninfo_unexecuted_blocks=1 00:04:52.520 00:04:52.520 ' 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.520 --rc genhtml_branch_coverage=1 00:04:52.520 --rc genhtml_function_coverage=1 00:04:52.520 --rc genhtml_legend=1 00:04:52.520 --rc geninfo_all_blocks=1 00:04:52.520 --rc geninfo_unexecuted_blocks=1 00:04:52.520 00:04:52.520 ' 00:04:52.520 01:30:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:52.520 OK 00:04:52.520 01:30:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:52.520 00:04:52.520 real 0m0.178s 00:04:52.520 user 0m0.100s 00:04:52.520 sys 0m0.080s 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.520 01:30:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 ************************************ 00:04:52.520 END TEST rpc_client 00:04:52.520 ************************************ 00:04:52.520 01:30:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:52.520 01:30:36 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.520 01:30:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.520 01:30:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.520 ************************************ 00:04:52.520 START TEST json_config 00:04:52.520 ************************************ 00:04:52.520 01:30:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.780 01:30:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.780 01:30:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.780 01:30:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.780 01:30:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.780 01:30:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.780 01:30:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:52.780 01:30:36 json_config -- scripts/common.sh@345 -- # : 1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.780 01:30:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.780 01:30:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@353 -- # local d=1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.780 01:30:36 json_config -- scripts/common.sh@355 -- # echo 1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.780 01:30:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@353 -- # local d=2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.780 01:30:36 json_config -- scripts/common.sh@355 -- # echo 2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.780 01:30:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.780 01:30:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.780 01:30:36 json_config -- scripts/common.sh@368 -- # return 0 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.780 --rc genhtml_branch_coverage=1 00:04:52.780 --rc genhtml_function_coverage=1 00:04:52.780 --rc genhtml_legend=1 00:04:52.780 --rc geninfo_all_blocks=1 00:04:52.780 --rc geninfo_unexecuted_blocks=1 00:04:52.780 00:04:52.780 ' 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.780 --rc genhtml_branch_coverage=1 00:04:52.780 --rc genhtml_function_coverage=1 00:04:52.780 --rc genhtml_legend=1 00:04:52.780 --rc geninfo_all_blocks=1 00:04:52.780 --rc geninfo_unexecuted_blocks=1 00:04:52.780 00:04:52.780 ' 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.780 --rc genhtml_branch_coverage=1 00:04:52.780 --rc genhtml_function_coverage=1 00:04:52.780 --rc genhtml_legend=1 00:04:52.780 --rc geninfo_all_blocks=1 00:04:52.780 --rc geninfo_unexecuted_blocks=1 00:04:52.780 00:04:52.780 ' 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.780 --rc genhtml_branch_coverage=1 00:04:52.780 --rc genhtml_function_coverage=1 00:04:52.780 --rc genhtml_legend=1 00:04:52.780 --rc geninfo_all_blocks=1 00:04:52.780 --rc geninfo_unexecuted_blocks=1 00:04:52.780 00:04:52.780 ' 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.780 01:30:36 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:917a758e-796b-4413-864e-1c730c68b4e2 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=917a758e-796b-4413-864e-1c730c68b4e2 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.780 01:30:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.780 01:30:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.780 01:30:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.780 01:30:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.780 01:30:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.780 01:30:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.780 01:30:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.780 01:30:36 json_config -- paths/export.sh@5 -- # export PATH 00:04:52.780 01:30:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@51 -- # : 0 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.780 01:30:36 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.780 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.780 01:30:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:52.780 WARNING: No tests are enabled so not running JSON configuration tests 00:04:52.780 01:30:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:52.780 00:04:52.780 real 0m0.137s 00:04:52.780 user 0m0.087s 00:04:52.780 sys 0m0.053s 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.780 01:30:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.780 ************************************ 00:04:52.780 END TEST json_config 00:04:52.780 ************************************ 00:04:52.780 01:30:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.780 01:30:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.780 01:30:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.780 01:30:36 -- common/autotest_common.sh@10 -- # set +x 00:04:52.780 ************************************ 00:04:52.780 START TEST json_config_extra_key 00:04:52.780 ************************************ 00:04:52.780 01:30:36 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:52.780 01:30:36 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.780 01:30:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.780 01:30:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.040 01:30:36 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.040 01:30:36 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.040 --rc genhtml_branch_coverage=1 00:04:53.040 --rc genhtml_function_coverage=1 00:04:53.040 --rc genhtml_legend=1 00:04:53.040 --rc geninfo_all_blocks=1 00:04:53.040 --rc geninfo_unexecuted_blocks=1 00:04:53.040 00:04:53.040 ' 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.040 --rc genhtml_branch_coverage=1 00:04:53.040 --rc genhtml_function_coverage=1 00:04:53.040 --rc genhtml_legend=1 00:04:53.040 --rc geninfo_all_blocks=1 00:04:53.040 --rc geninfo_unexecuted_blocks=1 00:04:53.040 00:04:53.040 ' 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.040 --rc genhtml_branch_coverage=1 00:04:53.040 --rc genhtml_function_coverage=1 00:04:53.040 --rc genhtml_legend=1 00:04:53.040 --rc geninfo_all_blocks=1 00:04:53.040 --rc geninfo_unexecuted_blocks=1 00:04:53.040 00:04:53.040 ' 00:04:53.040 01:30:36 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.040 --rc genhtml_branch_coverage=1 00:04:53.040 --rc 
genhtml_function_coverage=1 00:04:53.040 --rc genhtml_legend=1 00:04:53.040 --rc geninfo_all_blocks=1 00:04:53.040 --rc geninfo_unexecuted_blocks=1 00:04:53.040 00:04:53.040 ' 00:04:53.040 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:53.040 01:30:36 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.040 01:30:36 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.040 01:30:36 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:917a758e-796b-4413-864e-1c730c68b4e2 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=917a758e-796b-4413-864e-1c730c68b4e2 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:53.041 01:30:36 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.041 01:30:36 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.041 01:30:36 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.041 01:30:36 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.041 01:30:36 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.041 01:30:36 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.041 01:30:36 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.041 01:30:36 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.041 01:30:36 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.041 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.041 01:30:36 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.041 INFO: launching applications... 00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:53.041 01:30:36 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57769 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.041 Waiting for target to run... 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57769 /var/tmp/spdk_tgt.sock 00:04:53.041 01:30:36 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57769 ']' 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.041 01:30:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.041 [2024-11-21 01:30:36.847461] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:53.041 [2024-11-21 01:30:36.847731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57769 ] 00:04:53.300 [2024-11-21 01:30:37.175325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.559 [2024-11-21 01:30:37.266104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.817 01:30:37 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.817 01:30:37 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.817 00:04:53.817 01:30:37 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:53.817 INFO: shutting down applications... 
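The trace above launches spdk_tgt with the extra_key.json configuration and waits for it on /var/tmp/spdk_tgt.sock; the entries that follow send it SIGINT and poll until the process exits. A minimal standalone sketch of that start/wait/stop sequence, using the binary, socket and JSON paths shown in this run but plain rpc.py polling in place of the json_config/common.sh helpers (an approximation, not the test script itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/test/json_config/extra_key.json" &
    pid=$!
    # wait until the target answers on its RPC socket before doing anything else
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # graceful shutdown: SIGINT, then poll for up to 30 x 0.5s as the (( i < 30 )) loop does
    kill -SIGINT "$pid"
    for _ in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break
        sleep 0.5
    done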
00:04:53.817 01:30:37 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57769 ]] 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57769 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57769 00:04:53.817 01:30:37 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.382 01:30:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.382 01:30:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.382 01:30:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57769 00:04:54.382 01:30:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.952 01:30:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.952 01:30:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.952 01:30:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57769 00:04:54.952 01:30:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:55.519 01:30:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:55.519 01:30:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:55.519 01:30:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57769 00:04:55.519 01:30:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57769 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:56.085 01:30:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.085 SPDK target shutdown done 00:04:56.086 01:30:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.086 Success 00:04:56.086 01:30:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:56.086 ************************************ 00:04:56.086 END TEST json_config_extra_key 00:04:56.086 ************************************ 00:04:56.086 00:04:56.086 real 0m3.141s 00:04:56.086 user 0m2.678s 00:04:56.086 sys 0m0.392s 00:04:56.086 01:30:39 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.086 01:30:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:56.086 01:30:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.086 01:30:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.086 01:30:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.086 01:30:39 -- common/autotest_common.sh@10 -- # set +x 00:04:56.086 
************************************ 00:04:56.086 START TEST alias_rpc 00:04:56.086 ************************************ 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:56.086 * Looking for test storage... 00:04:56.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.086 01:30:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.086 --rc genhtml_branch_coverage=1 00:04:56.086 --rc genhtml_function_coverage=1 00:04:56.086 --rc genhtml_legend=1 00:04:56.086 --rc geninfo_all_blocks=1 00:04:56.086 --rc geninfo_unexecuted_blocks=1 00:04:56.086 00:04:56.086 ' 00:04:56.086 01:30:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:56.086 01:30:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57867 00:04:56.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.086 01:30:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57867 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57867 ']' 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
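The alias_rpc run that follows starts a bare spdk_tgt, waits for it on /var/tmp/spdk.sock, drives it with scripts/rpc.py load_config -i, and finally kills it. A rough standalone sketch of that sequence; the JSON payload here is only a stand-in, since the real test feeds its own config exercising deprecated RPC alias names:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &      # default RPC socket: /var/tmp/spdk.sock
    pid=$!
    until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
    # load_config takes the JSON on stdin here; -i is passed exactly as in the trace
    echo '{"subsystems": []}' | "$SPDK/scripts/rpc.py" load_config -i
    kill -SIGINT "$pid"
    wait "$pid"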
00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.086 01:30:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.086 01:30:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.086 [2024-11-21 01:30:40.037993] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:04:56.086 [2024-11-21 01:30:40.038284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57867 ] 00:04:56.344 [2024-11-21 01:30:40.195830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.344 [2024-11-21 01:30:40.272842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.910 01:30:40 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.910 01:30:40 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.910 01:30:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:57.168 01:30:41 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57867 00:04:57.168 01:30:41 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57867 ']' 00:04:57.168 01:30:41 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57867 00:04:57.168 01:30:41 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:57.168 01:30:41 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.168 01:30:41 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57867 00:04:57.427 killing process with pid 57867 00:04:57.427 01:30:41 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.427 01:30:41 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.427 01:30:41 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57867' 00:04:57.427 01:30:41 alias_rpc -- common/autotest_common.sh@973 -- # kill 57867 00:04:57.427 01:30:41 alias_rpc -- common/autotest_common.sh@978 -- # wait 57867 00:04:58.361 ************************************ 00:04:58.361 END TEST alias_rpc 00:04:58.361 ************************************ 00:04:58.361 00:04:58.361 real 0m2.486s 00:04:58.361 user 0m2.608s 00:04:58.361 sys 0m0.387s 00:04:58.361 01:30:42 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.361 01:30:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.619 01:30:42 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:58.619 01:30:42 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.619 01:30:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.619 01:30:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.619 01:30:42 -- common/autotest_common.sh@10 -- # set +x 00:04:58.619 ************************************ 00:04:58.619 START TEST spdkcli_tcp 00:04:58.619 ************************************ 00:04:58.619 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:58.619 * Looking for test storage... 
00:04:58.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:58.619 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.619 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.619 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.619 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.619 01:30:42 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.620 01:30:42 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.620 --rc genhtml_branch_coverage=1 00:04:58.620 --rc genhtml_function_coverage=1 00:04:58.620 --rc genhtml_legend=1 00:04:58.620 --rc geninfo_all_blocks=1 00:04:58.620 --rc geninfo_unexecuted_blocks=1 00:04:58.620 00:04:58.620 ' 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.620 --rc genhtml_branch_coverage=1 00:04:58.620 --rc genhtml_function_coverage=1 00:04:58.620 --rc genhtml_legend=1 00:04:58.620 --rc geninfo_all_blocks=1 00:04:58.620 --rc geninfo_unexecuted_blocks=1 00:04:58.620 
00:04:58.620 ' 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.620 --rc genhtml_branch_coverage=1 00:04:58.620 --rc genhtml_function_coverage=1 00:04:58.620 --rc genhtml_legend=1 00:04:58.620 --rc geninfo_all_blocks=1 00:04:58.620 --rc geninfo_unexecuted_blocks=1 00:04:58.620 00:04:58.620 ' 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.620 --rc genhtml_branch_coverage=1 00:04:58.620 --rc genhtml_function_coverage=1 00:04:58.620 --rc genhtml_legend=1 00:04:58.620 --rc geninfo_all_blocks=1 00:04:58.620 --rc geninfo_unexecuted_blocks=1 00:04:58.620 00:04:58.620 ' 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57957 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57957 00:04:58.620 01:30:42 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57957 ']' 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.620 01:30:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:58.620 [2024-11-21 01:30:42.554655] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:04:58.620 [2024-11-21 01:30:42.554772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57957 ] 00:04:58.879 [2024-11-21 01:30:42.715253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.879 [2024-11-21 01:30:42.815220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.879 [2024-11-21 01:30:42.815387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.813 01:30:43 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.813 01:30:43 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:59.813 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57969 00:04:59.813 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:59.813 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:59.813 [ 00:04:59.813 "bdev_malloc_delete", 00:04:59.813 "bdev_malloc_create", 00:04:59.813 "bdev_null_resize", 00:04:59.813 "bdev_null_delete", 00:04:59.813 "bdev_null_create", 00:04:59.813 "bdev_nvme_cuse_unregister", 00:04:59.813 "bdev_nvme_cuse_register", 00:04:59.813 "bdev_opal_new_user", 00:04:59.813 "bdev_opal_set_lock_state", 00:04:59.813 "bdev_opal_delete", 00:04:59.813 "bdev_opal_get_info", 00:04:59.813 "bdev_opal_create", 00:04:59.813 "bdev_nvme_opal_revert", 00:04:59.813 "bdev_nvme_opal_init", 00:04:59.813 "bdev_nvme_send_cmd", 00:04:59.813 "bdev_nvme_set_keys", 00:04:59.813 "bdev_nvme_get_path_iostat", 00:04:59.813 "bdev_nvme_get_mdns_discovery_info", 00:04:59.813 "bdev_nvme_stop_mdns_discovery", 00:04:59.813 "bdev_nvme_start_mdns_discovery", 00:04:59.813 "bdev_nvme_set_multipath_policy", 00:04:59.813 "bdev_nvme_set_preferred_path", 00:04:59.813 "bdev_nvme_get_io_paths", 00:04:59.813 "bdev_nvme_remove_error_injection", 00:04:59.813 "bdev_nvme_add_error_injection", 00:04:59.813 "bdev_nvme_get_discovery_info", 00:04:59.813 "bdev_nvme_stop_discovery", 00:04:59.813 "bdev_nvme_start_discovery", 00:04:59.813 "bdev_nvme_get_controller_health_info", 00:04:59.813 "bdev_nvme_disable_controller", 00:04:59.813 "bdev_nvme_enable_controller", 00:04:59.813 "bdev_nvme_reset_controller", 00:04:59.813 "bdev_nvme_get_transport_statistics", 00:04:59.813 "bdev_nvme_apply_firmware", 00:04:59.813 "bdev_nvme_detach_controller", 00:04:59.813 "bdev_nvme_get_controllers", 00:04:59.813 "bdev_nvme_attach_controller", 00:04:59.813 "bdev_nvme_set_hotplug", 00:04:59.813 "bdev_nvme_set_options", 00:04:59.813 "bdev_passthru_delete", 00:04:59.813 "bdev_passthru_create", 00:04:59.813 "bdev_lvol_set_parent_bdev", 00:04:59.813 "bdev_lvol_set_parent", 00:04:59.813 "bdev_lvol_check_shallow_copy", 00:04:59.813 "bdev_lvol_start_shallow_copy", 00:04:59.813 "bdev_lvol_grow_lvstore", 00:04:59.813 "bdev_lvol_get_lvols", 00:04:59.813 "bdev_lvol_get_lvstores", 00:04:59.813 "bdev_lvol_delete", 00:04:59.813 "bdev_lvol_set_read_only", 00:04:59.813 "bdev_lvol_resize", 00:04:59.813 "bdev_lvol_decouple_parent", 00:04:59.813 "bdev_lvol_inflate", 00:04:59.813 "bdev_lvol_rename", 00:04:59.813 "bdev_lvol_clone_bdev", 00:04:59.813 "bdev_lvol_clone", 00:04:59.813 "bdev_lvol_snapshot", 00:04:59.813 "bdev_lvol_create", 00:04:59.813 "bdev_lvol_delete_lvstore", 00:04:59.813 "bdev_lvol_rename_lvstore", 00:04:59.813 
"bdev_lvol_create_lvstore", 00:04:59.813 "bdev_raid_set_options", 00:04:59.813 "bdev_raid_remove_base_bdev", 00:04:59.813 "bdev_raid_add_base_bdev", 00:04:59.813 "bdev_raid_delete", 00:04:59.813 "bdev_raid_create", 00:04:59.813 "bdev_raid_get_bdevs", 00:04:59.813 "bdev_error_inject_error", 00:04:59.813 "bdev_error_delete", 00:04:59.813 "bdev_error_create", 00:04:59.813 "bdev_split_delete", 00:04:59.813 "bdev_split_create", 00:04:59.813 "bdev_delay_delete", 00:04:59.813 "bdev_delay_create", 00:04:59.813 "bdev_delay_update_latency", 00:04:59.813 "bdev_zone_block_delete", 00:04:59.813 "bdev_zone_block_create", 00:04:59.813 "blobfs_create", 00:04:59.813 "blobfs_detect", 00:04:59.813 "blobfs_set_cache_size", 00:04:59.813 "bdev_xnvme_delete", 00:04:59.813 "bdev_xnvme_create", 00:04:59.813 "bdev_aio_delete", 00:04:59.813 "bdev_aio_rescan", 00:04:59.813 "bdev_aio_create", 00:04:59.813 "bdev_ftl_set_property", 00:04:59.813 "bdev_ftl_get_properties", 00:04:59.813 "bdev_ftl_get_stats", 00:04:59.813 "bdev_ftl_unmap", 00:04:59.813 "bdev_ftl_unload", 00:04:59.813 "bdev_ftl_delete", 00:04:59.813 "bdev_ftl_load", 00:04:59.813 "bdev_ftl_create", 00:04:59.813 "bdev_virtio_attach_controller", 00:04:59.813 "bdev_virtio_scsi_get_devices", 00:04:59.813 "bdev_virtio_detach_controller", 00:04:59.813 "bdev_virtio_blk_set_hotplug", 00:04:59.813 "bdev_iscsi_delete", 00:04:59.813 "bdev_iscsi_create", 00:04:59.813 "bdev_iscsi_set_options", 00:04:59.813 "accel_error_inject_error", 00:04:59.813 "ioat_scan_accel_module", 00:04:59.813 "dsa_scan_accel_module", 00:04:59.813 "iaa_scan_accel_module", 00:04:59.813 "keyring_file_remove_key", 00:04:59.813 "keyring_file_add_key", 00:04:59.813 "keyring_linux_set_options", 00:04:59.813 "fsdev_aio_delete", 00:04:59.813 "fsdev_aio_create", 00:04:59.813 "iscsi_get_histogram", 00:04:59.813 "iscsi_enable_histogram", 00:04:59.813 "iscsi_set_options", 00:04:59.813 "iscsi_get_auth_groups", 00:04:59.813 "iscsi_auth_group_remove_secret", 00:04:59.813 "iscsi_auth_group_add_secret", 00:04:59.813 "iscsi_delete_auth_group", 00:04:59.813 "iscsi_create_auth_group", 00:04:59.813 "iscsi_set_discovery_auth", 00:04:59.813 "iscsi_get_options", 00:04:59.813 "iscsi_target_node_request_logout", 00:04:59.813 "iscsi_target_node_set_redirect", 00:04:59.813 "iscsi_target_node_set_auth", 00:04:59.813 "iscsi_target_node_add_lun", 00:04:59.813 "iscsi_get_stats", 00:04:59.813 "iscsi_get_connections", 00:04:59.813 "iscsi_portal_group_set_auth", 00:04:59.813 "iscsi_start_portal_group", 00:04:59.813 "iscsi_delete_portal_group", 00:04:59.813 "iscsi_create_portal_group", 00:04:59.813 "iscsi_get_portal_groups", 00:04:59.813 "iscsi_delete_target_node", 00:04:59.813 "iscsi_target_node_remove_pg_ig_maps", 00:04:59.813 "iscsi_target_node_add_pg_ig_maps", 00:04:59.813 "iscsi_create_target_node", 00:04:59.813 "iscsi_get_target_nodes", 00:04:59.813 "iscsi_delete_initiator_group", 00:04:59.813 "iscsi_initiator_group_remove_initiators", 00:04:59.813 "iscsi_initiator_group_add_initiators", 00:04:59.813 "iscsi_create_initiator_group", 00:04:59.813 "iscsi_get_initiator_groups", 00:04:59.813 "nvmf_set_crdt", 00:04:59.813 "nvmf_set_config", 00:04:59.813 "nvmf_set_max_subsystems", 00:04:59.813 "nvmf_stop_mdns_prr", 00:04:59.813 "nvmf_publish_mdns_prr", 00:04:59.813 "nvmf_subsystem_get_listeners", 00:04:59.813 "nvmf_subsystem_get_qpairs", 00:04:59.813 "nvmf_subsystem_get_controllers", 00:04:59.813 "nvmf_get_stats", 00:04:59.813 "nvmf_get_transports", 00:04:59.813 "nvmf_create_transport", 00:04:59.813 "nvmf_get_targets", 00:04:59.813 
"nvmf_delete_target", 00:04:59.813 "nvmf_create_target", 00:04:59.813 "nvmf_subsystem_allow_any_host", 00:04:59.813 "nvmf_subsystem_set_keys", 00:04:59.813 "nvmf_subsystem_remove_host", 00:04:59.813 "nvmf_subsystem_add_host", 00:04:59.813 "nvmf_ns_remove_host", 00:04:59.813 "nvmf_ns_add_host", 00:04:59.813 "nvmf_subsystem_remove_ns", 00:04:59.813 "nvmf_subsystem_set_ns_ana_group", 00:04:59.813 "nvmf_subsystem_add_ns", 00:04:59.814 "nvmf_subsystem_listener_set_ana_state", 00:04:59.814 "nvmf_discovery_get_referrals", 00:04:59.814 "nvmf_discovery_remove_referral", 00:04:59.814 "nvmf_discovery_add_referral", 00:04:59.814 "nvmf_subsystem_remove_listener", 00:04:59.814 "nvmf_subsystem_add_listener", 00:04:59.814 "nvmf_delete_subsystem", 00:04:59.814 "nvmf_create_subsystem", 00:04:59.814 "nvmf_get_subsystems", 00:04:59.814 "env_dpdk_get_mem_stats", 00:04:59.814 "nbd_get_disks", 00:04:59.814 "nbd_stop_disk", 00:04:59.814 "nbd_start_disk", 00:04:59.814 "ublk_recover_disk", 00:04:59.814 "ublk_get_disks", 00:04:59.814 "ublk_stop_disk", 00:04:59.814 "ublk_start_disk", 00:04:59.814 "ublk_destroy_target", 00:04:59.814 "ublk_create_target", 00:04:59.814 "virtio_blk_create_transport", 00:04:59.814 "virtio_blk_get_transports", 00:04:59.814 "vhost_controller_set_coalescing", 00:04:59.814 "vhost_get_controllers", 00:04:59.814 "vhost_delete_controller", 00:04:59.814 "vhost_create_blk_controller", 00:04:59.814 "vhost_scsi_controller_remove_target", 00:04:59.814 "vhost_scsi_controller_add_target", 00:04:59.814 "vhost_start_scsi_controller", 00:04:59.814 "vhost_create_scsi_controller", 00:04:59.814 "thread_set_cpumask", 00:04:59.814 "scheduler_set_options", 00:04:59.814 "framework_get_governor", 00:04:59.814 "framework_get_scheduler", 00:04:59.814 "framework_set_scheduler", 00:04:59.814 "framework_get_reactors", 00:04:59.814 "thread_get_io_channels", 00:04:59.814 "thread_get_pollers", 00:04:59.814 "thread_get_stats", 00:04:59.814 "framework_monitor_context_switch", 00:04:59.814 "spdk_kill_instance", 00:04:59.814 "log_enable_timestamps", 00:04:59.814 "log_get_flags", 00:04:59.814 "log_clear_flag", 00:04:59.814 "log_set_flag", 00:04:59.814 "log_get_level", 00:04:59.814 "log_set_level", 00:04:59.814 "log_get_print_level", 00:04:59.814 "log_set_print_level", 00:04:59.814 "framework_enable_cpumask_locks", 00:04:59.814 "framework_disable_cpumask_locks", 00:04:59.814 "framework_wait_init", 00:04:59.814 "framework_start_init", 00:04:59.814 "scsi_get_devices", 00:04:59.814 "bdev_get_histogram", 00:04:59.814 "bdev_enable_histogram", 00:04:59.814 "bdev_set_qos_limit", 00:04:59.814 "bdev_set_qd_sampling_period", 00:04:59.814 "bdev_get_bdevs", 00:04:59.814 "bdev_reset_iostat", 00:04:59.814 "bdev_get_iostat", 00:04:59.814 "bdev_examine", 00:04:59.814 "bdev_wait_for_examine", 00:04:59.814 "bdev_set_options", 00:04:59.814 "accel_get_stats", 00:04:59.814 "accel_set_options", 00:04:59.814 "accel_set_driver", 00:04:59.814 "accel_crypto_key_destroy", 00:04:59.814 "accel_crypto_keys_get", 00:04:59.814 "accel_crypto_key_create", 00:04:59.814 "accel_assign_opc", 00:04:59.814 "accel_get_module_info", 00:04:59.814 "accel_get_opc_assignments", 00:04:59.814 "vmd_rescan", 00:04:59.814 "vmd_remove_device", 00:04:59.814 "vmd_enable", 00:04:59.814 "sock_get_default_impl", 00:04:59.814 "sock_set_default_impl", 00:04:59.814 "sock_impl_set_options", 00:04:59.814 "sock_impl_get_options", 00:04:59.814 "iobuf_get_stats", 00:04:59.814 "iobuf_set_options", 00:04:59.814 "keyring_get_keys", 00:04:59.814 "framework_get_pci_devices", 00:04:59.814 
"framework_get_config", 00:04:59.814 "framework_get_subsystems", 00:04:59.814 "fsdev_set_opts", 00:04:59.814 "fsdev_get_opts", 00:04:59.814 "trace_get_info", 00:04:59.814 "trace_get_tpoint_group_mask", 00:04:59.814 "trace_disable_tpoint_group", 00:04:59.814 "trace_enable_tpoint_group", 00:04:59.814 "trace_clear_tpoint_mask", 00:04:59.814 "trace_set_tpoint_mask", 00:04:59.814 "notify_get_notifications", 00:04:59.814 "notify_get_types", 00:04:59.814 "spdk_get_version", 00:04:59.814 "rpc_get_methods" 00:04:59.814 ] 00:04:59.814 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:59.814 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:59.814 01:30:43 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57957 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57957 ']' 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57957 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57957 00:04:59.814 killing process with pid 57957 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57957' 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57957 00:04:59.814 01:30:43 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57957 00:05:01.189 00:05:01.189 real 0m2.680s 00:05:01.189 user 0m4.827s 00:05:01.189 sys 0m0.425s 00:05:01.189 01:30:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.189 01:30:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 ************************************ 00:05:01.189 END TEST spdkcli_tcp 00:05:01.189 ************************************ 00:05:01.189 01:30:45 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.189 01:30:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.189 01:30:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.189 01:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:01.189 ************************************ 00:05:01.189 START TEST dpdk_mem_utility 00:05:01.189 ************************************ 00:05:01.189 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:01.189 * Looking for test storage... 
00:05:01.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:01.189 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.189 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.189 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.448 01:30:45 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.448 --rc genhtml_branch_coverage=1 00:05:01.448 --rc genhtml_function_coverage=1 00:05:01.448 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.448 --rc 
genhtml_branch_coverage=1 00:05:01.448 --rc genhtml_function_coverage=1 00:05:01.448 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.448 --rc genhtml_branch_coverage=1 00:05:01.448 --rc genhtml_function_coverage=1 00:05:01.448 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.448 --rc genhtml_branch_coverage=1 00:05:01.448 --rc genhtml_function_coverage=1 00:05:01.448 --rc genhtml_legend=1 00:05:01.448 --rc geninfo_all_blocks=1 00:05:01.448 --rc geninfo_unexecuted_blocks=1 00:05:01.448 00:05:01.448 ' 00:05:01.448 01:30:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.448 01:30:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58063 00:05:01.448 01:30:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58063 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58063 ']' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.448 01:30:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.448 01:30:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:01.448 [2024-11-21 01:30:45.269821] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:01.448 [2024-11-21 01:30:45.269946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58063 ] 00:05:01.706 [2024-11-21 01:30:45.430305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.706 [2024-11-21 01:30:45.527333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.275 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.275 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:02.276 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:02.276 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:02.276 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.276 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:02.276 { 00:05:02.276 "filename": "/tmp/spdk_mem_dump.txt" 00:05:02.276 } 00:05:02.276 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.276 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:02.276 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:02.276 1 heaps totaling size 816.000000 MiB 00:05:02.276 size: 816.000000 MiB heap id: 0 00:05:02.276 end heaps---------- 00:05:02.276 9 mempools totaling size 595.772034 MiB 00:05:02.276 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:02.276 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:02.276 size: 92.545471 MiB name: bdev_io_58063 00:05:02.276 size: 50.003479 MiB name: msgpool_58063 00:05:02.276 size: 36.509338 MiB name: fsdev_io_58063 00:05:02.276 size: 21.763794 MiB name: PDU_Pool 00:05:02.276 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:02.276 size: 4.133484 MiB name: evtpool_58063 00:05:02.276 size: 0.026123 MiB name: Session_Pool 00:05:02.276 end mempools------- 00:05:02.276 6 memzones totaling size 4.142822 MiB 00:05:02.276 size: 1.000366 MiB name: RG_ring_0_58063 00:05:02.276 size: 1.000366 MiB name: RG_ring_1_58063 00:05:02.276 size: 1.000366 MiB name: RG_ring_4_58063 00:05:02.276 size: 1.000366 MiB name: RG_ring_5_58063 00:05:02.276 size: 0.125366 MiB name: RG_ring_2_58063 00:05:02.276 size: 0.015991 MiB name: RG_ring_3_58063 00:05:02.276 end memzones------- 00:05:02.276 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:02.276 heap id: 0 total size: 816.000000 MiB number of busy elements: 325 number of free elements: 18 00:05:02.276 list of free elements. 
size: 16.788940 MiB 00:05:02.276 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:02.276 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:02.276 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:02.276 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:02.276 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:02.276 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:02.276 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:02.276 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:02.276 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:02.276 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:02.276 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:02.276 element at address: 0x20001ac00000 with size: 0.558533 MiB 00:05:02.276 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:02.276 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:02.276 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:02.276 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:02.276 element at address: 0x200028000000 with size: 0.391418 MiB 00:05:02.276 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:02.276 list of standard malloc elements. size: 199.290161 MiB 00:05:02.276 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:02.276 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:02.276 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:02.276 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:02.276 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:02.276 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:02.276 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:02.276 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:02.276 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:02.276 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:02.276 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:02.276 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:02.276 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:02.276 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:02.276 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:02.276 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:02.276 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:02.277 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:02.277 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8efc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 
00:05:02.277 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:02.277 element at 
address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200028064340 with size: 0.000244 MiB 00:05:02.277 element at address: 0x200028064440 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b100 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:02.277 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ce80 
with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:02.278 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:02.278 list of memzone associated elements. 
size: 599.920898 MiB 00:05:02.278 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:02.278 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:02.278 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:02.278 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:02.278 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:02.278 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58063_0 00:05:02.278 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:02.278 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58063_0 00:05:02.278 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:02.278 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58063_0 00:05:02.278 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:02.278 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:02.278 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:02.278 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:02.278 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:02.278 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58063_0 00:05:02.278 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:02.278 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58063 00:05:02.278 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:02.278 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58063 00:05:02.278 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:02.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:02.278 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:02.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:02.278 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:02.278 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:02.278 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:02.278 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:02.278 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:02.278 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58063 00:05:02.278 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:02.278 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58063 00:05:02.278 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:02.278 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58063 00:05:02.278 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:02.278 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58063 00:05:02.278 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:02.278 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58063 00:05:02.278 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:02.278 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58063 00:05:02.278 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:02.278 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:02.278 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:02.278 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:02.278 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:02.278 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:02.278 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:02.278 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58063 00:05:02.278 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:02.278 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58063 00:05:02.278 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:02.278 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:02.278 element at address: 0x200028064540 with size: 0.023804 MiB 00:05:02.278 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:02.278 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:02.278 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58063 00:05:02.278 element at address: 0x20002806a6c0 with size: 0.002502 MiB 00:05:02.278 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:02.278 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:02.278 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58063 00:05:02.278 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:02.278 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58063 00:05:02.278 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:02.278 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58063 00:05:02.278 element at address: 0x20002806b200 with size: 0.000366 MiB 00:05:02.278 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:02.278 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:02.278 01:30:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58063 00:05:02.278 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58063 ']' 00:05:02.278 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58063 00:05:02.278 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:02.278 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58063 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.536 killing process with pid 58063 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58063' 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58063 00:05:02.536 01:30:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58063 00:05:03.909 00:05:03.909 real 0m2.606s 00:05:03.909 user 0m2.640s 00:05:03.909 sys 0m0.380s 00:05:03.909 ************************************ 00:05:03.909 END TEST dpdk_mem_utility 00:05:03.909 ************************************ 00:05:03.909 01:30:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.909 01:30:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.909 01:30:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.909 01:30:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.909 01:30:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.909 01:30:47 -- common/autotest_common.sh@10 -- # set +x 
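[editor's note] The killprocess sequence traced above for pid 58063 (check the pid argument, probe it with kill -0, resolve its comm name via ps, then kill and wait) can be read as roughly the helper below. This is a sketch reconstructed from the xtrace lines only, not the verbatim autotest_common.sh source; the sudo-wrapped branch it hints at is omitted.

    # Sketch of the traced killprocess pattern (names and error handling assumed).
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                 # trace: '[' -z 58063 ']'
        kill -0 "$pid" || return 0                # process already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        if [[ $process_name != sudo ]]; then      # the real helper treats sudo-wrapped apps differently
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"                           # reap it so the caller sees the exit status
        fi
    }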
00:05:03.909 ************************************ 00:05:03.909 START TEST event 00:05:03.909 ************************************ 00:05:03.909 01:30:47 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:03.909 * Looking for test storage... 00:05:03.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:03.909 01:30:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.909 01:30:47 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.909 01:30:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.909 01:30:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.909 01:30:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.909 01:30:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.909 01:30:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.909 01:30:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.909 01:30:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.909 01:30:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.909 01:30:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.909 01:30:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.909 01:30:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.909 01:30:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.909 01:30:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.909 01:30:47 event -- scripts/common.sh@344 -- # case "$op" in 00:05:03.909 01:30:47 event -- scripts/common.sh@345 -- # : 1 00:05:03.909 01:30:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.909 01:30:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.909 01:30:47 event -- scripts/common.sh@365 -- # decimal 1 00:05:03.909 01:30:47 event -- scripts/common.sh@353 -- # local d=1 00:05:03.909 01:30:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.909 01:30:47 event -- scripts/common.sh@355 -- # echo 1 00:05:03.909 01:30:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.909 01:30:47 event -- scripts/common.sh@366 -- # decimal 2 00:05:03.909 01:30:47 event -- scripts/common.sh@353 -- # local d=2 00:05:03.909 01:30:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.909 01:30:47 event -- scripts/common.sh@355 -- # echo 2 00:05:03.910 01:30:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.910 01:30:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.910 01:30:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.910 01:30:47 event -- scripts/common.sh@368 -- # return 0 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.910 --rc genhtml_branch_coverage=1 00:05:03.910 --rc genhtml_function_coverage=1 00:05:03.910 --rc genhtml_legend=1 00:05:03.910 --rc geninfo_all_blocks=1 00:05:03.910 --rc geninfo_unexecuted_blocks=1 00:05:03.910 00:05:03.910 ' 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.910 --rc genhtml_branch_coverage=1 00:05:03.910 --rc genhtml_function_coverage=1 00:05:03.910 --rc genhtml_legend=1 00:05:03.910 --rc 
geninfo_all_blocks=1 00:05:03.910 --rc geninfo_unexecuted_blocks=1 00:05:03.910 00:05:03.910 ' 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.910 --rc genhtml_branch_coverage=1 00:05:03.910 --rc genhtml_function_coverage=1 00:05:03.910 --rc genhtml_legend=1 00:05:03.910 --rc geninfo_all_blocks=1 00:05:03.910 --rc geninfo_unexecuted_blocks=1 00:05:03.910 00:05:03.910 ' 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.910 --rc genhtml_branch_coverage=1 00:05:03.910 --rc genhtml_function_coverage=1 00:05:03.910 --rc genhtml_legend=1 00:05:03.910 --rc geninfo_all_blocks=1 00:05:03.910 --rc geninfo_unexecuted_blocks=1 00:05:03.910 00:05:03.910 ' 00:05:03.910 01:30:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:03.910 01:30:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:03.910 01:30:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:03.910 01:30:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.910 01:30:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.910 ************************************ 00:05:03.910 START TEST event_perf 00:05:03.910 ************************************ 00:05:03.910 01:30:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.167 Running I/O for 1 seconds...[2024-11-21 01:30:47.875052] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:04.167 [2024-11-21 01:30:47.875158] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58154 ] 00:05:04.167 [2024-11-21 01:30:48.031259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.167 [2024-11-21 01:30:48.114725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.167 [2024-11-21 01:30:48.115093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.167 Running I/O for 1 seconds...[2024-11-21 01:30:48.115235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.167 [2024-11-21 01:30:48.115252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.546 00:05:05.546 lcore 0: 206562 00:05:05.546 lcore 1: 206564 00:05:05.546 lcore 2: 206564 00:05:05.546 lcore 3: 206560 00:05:05.546 done. 
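[editor's note] The lt/cmp_versions trace earlier in this test (lcov version vs. 1.15: split both versions on IFS=.-: into arrays, then compare component by component) corresponds to a helper along the lines sketched below. Only the '<' path exercised by lt() is shown, and the body is inferred from the traced commands in scripts/common.sh rather than copied from it.

    # Sketch: strict "less than" comparison of dotted version strings.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local v ver1 ver2 ver1_l ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            # Missing components compare as 0, so "1.15" vs "2" behaves like 1.15.0 vs 2.0.0.
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        done
        return 1   # equal versions are not strictly "<"
    }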
00:05:05.546 ************************************ 00:05:05.546 00:05:05.546 real 0m1.401s 00:05:05.546 user 0m4.199s 00:05:05.546 sys 0m0.082s 00:05:05.546 01:30:49 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.546 01:30:49 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.546 END TEST event_perf 00:05:05.546 ************************************ 00:05:05.546 01:30:49 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.546 01:30:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.546 01:30:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.546 01:30:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.546 ************************************ 00:05:05.546 START TEST event_reactor 00:05:05.546 ************************************ 00:05:05.546 01:30:49 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.546 [2024-11-21 01:30:49.318585] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:05.547 [2024-11-21 01:30:49.318679] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58194 ] 00:05:05.547 [2024-11-21 01:30:49.468781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.808 [2024-11-21 01:30:49.550055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.750 test_start 00:05:06.750 oneshot 00:05:06.750 tick 100 00:05:06.750 tick 100 00:05:06.750 tick 250 00:05:06.750 tick 100 00:05:06.750 tick 100 00:05:06.750 tick 100 00:05:06.750 tick 250 00:05:06.750 tick 500 00:05:06.750 tick 100 00:05:06.750 tick 100 00:05:06.750 tick 250 00:05:06.750 tick 100 00:05:06.750 tick 100 00:05:06.750 test_end 00:05:06.750 00:05:06.750 real 0m1.378s 00:05:06.750 user 0m1.209s 00:05:06.750 sys 0m0.061s 00:05:06.750 01:30:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.750 ************************************ 00:05:06.750 END TEST event_reactor 00:05:06.750 ************************************ 00:05:06.750 01:30:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:06.750 01:30:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:06.750 01:30:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:06.750 01:30:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.750 01:30:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.008 ************************************ 00:05:07.008 START TEST event_reactor_perf 00:05:07.008 ************************************ 00:05:07.008 01:30:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.008 [2024-11-21 01:30:50.736562] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:07.008 [2024-11-21 01:30:50.736683] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:05:07.008 [2024-11-21 01:30:50.897934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.265 [2024-11-21 01:30:50.994423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.197 test_start 00:05:08.197 test_end 00:05:08.197 Performance: 317674 events per second 00:05:08.197 00:05:08.197 real 0m1.434s 00:05:08.197 user 0m1.270s 00:05:08.197 sys 0m0.057s 00:05:08.197 01:30:52 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.197 01:30:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.197 ************************************ 00:05:08.197 END TEST event_reactor_perf 00:05:08.197 ************************************ 00:05:08.456 01:30:52 event -- event/event.sh@49 -- # uname -s 00:05:08.456 01:30:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:08.456 01:30:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.456 01:30:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.456 01:30:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.456 01:30:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.456 ************************************ 00:05:08.456 START TEST event_scheduler 00:05:08.456 ************************************ 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:08.456 * Looking for test storage... 
00:05:08.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.456 01:30:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.456 --rc genhtml_branch_coverage=1 00:05:08.456 --rc genhtml_function_coverage=1 00:05:08.456 --rc genhtml_legend=1 00:05:08.456 --rc geninfo_all_blocks=1 00:05:08.456 --rc geninfo_unexecuted_blocks=1 00:05:08.456 00:05:08.456 ' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.456 --rc genhtml_branch_coverage=1 00:05:08.456 --rc genhtml_function_coverage=1 00:05:08.456 --rc genhtml_legend=1 00:05:08.456 --rc geninfo_all_blocks=1 00:05:08.456 --rc geninfo_unexecuted_blocks=1 00:05:08.456 00:05:08.456 ' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:08.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.456 --rc genhtml_branch_coverage=1 00:05:08.456 --rc genhtml_function_coverage=1 00:05:08.456 --rc genhtml_legend=1 00:05:08.456 --rc geninfo_all_blocks=1 00:05:08.456 --rc geninfo_unexecuted_blocks=1 00:05:08.456 00:05:08.456 ' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.456 --rc genhtml_branch_coverage=1 00:05:08.456 --rc genhtml_function_coverage=1 00:05:08.456 --rc genhtml_legend=1 00:05:08.456 --rc geninfo_all_blocks=1 00:05:08.456 --rc geninfo_unexecuted_blocks=1 00:05:08.456 00:05:08.456 ' 00:05:08.456 01:30:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:08.456 01:30:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58301 00:05:08.456 01:30:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.456 01:30:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58301 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58301 ']' 00:05:08.456 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.456 01:30:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:08.456 01:30:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:08.456 [2024-11-21 01:30:52.388457] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:08.456 [2024-11-21 01:30:52.388586] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58301 ] 00:05:08.714 [2024-11-21 01:30:52.547801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:08.714 [2024-11-21 01:30:52.649599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.714 [2024-11-21 01:30:52.649797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.714 [2024-11-21 01:30:52.650080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.714 [2024-11-21 01:30:52.650084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:09.281 01:30:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.281 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.281 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.281 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.281 POWER: Cannot set governor of lcore 0 to performance 00:05:09.281 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.281 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.281 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:09.281 POWER: Cannot set governor of lcore 0 to userspace 00:05:09.281 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:09.281 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:09.281 POWER: Unable to set Power Management Environment for lcore 0 00:05:09.281 [2024-11-21 01:30:53.195854] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:09.281 [2024-11-21 01:30:53.195879] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:09.281 [2024-11-21 01:30:53.195889] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:09.281 [2024-11-21 
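[editor's note] The scheduler test starts its app with --wait-for-rpc and only selects the dynamic scheduler afterwards over RPC; the POWER/governor warnings that follow are expected on a VM with no cpufreq sysfs, and the app falls back gracefully. A minimal manual equivalent is sketched below; spdk_tgt stands in for the test's dedicated scheduler app, the paths are illustrative, and framework_set_scheduler / framework_start_init are the standard rpc.py methods.

    # Sketch: start an SPDK app paused, pick the dynamic scheduler, then init the framework.
    ./build/bin/spdk_tgt -m 0xF -p 0x2 --wait-for-rpc &
    app_pid=$!
    # (the test's waitforlisten helper blocks here until /var/tmp/spdk.sock accepts connections)

    ./scripts/rpc.py framework_set_scheduler dynamic   # must be issued before framework_start_init
    ./scripts/rpc.py framework_start_init              # reactors leave the wait-for-rpc state here

    # ... exercise the scheduler (e.g. create threads and vary their activity) ...
    kill "$app_pid" && wait "$app_pid"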
01:30:53.195905] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:09.281 [2024-11-21 01:30:53.195913] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:09.281 [2024-11-21 01:30:53.195921] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.281 01:30:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.281 01:30:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 [2024-11-21 01:30:53.417067] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:09.539 01:30:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:09.539 01:30:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.539 01:30:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 ************************************ 00:05:09.539 START TEST scheduler_create_thread 00:05:09.539 ************************************ 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 2 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 3 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 4 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:09.539 01:30:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 5 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 6 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.539 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.539 7 00:05:09.540 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.540 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:09.540 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.540 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 8 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 9 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 10 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 01:30:53 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.798 01:30:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.366 ************************************ 00:05:10.366 END TEST scheduler_create_thread 00:05:10.366 ************************************ 00:05:10.366 01:30:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.366 00:05:10.366 real 0m0.598s 00:05:10.366 user 0m0.014s 00:05:10.366 sys 0m0.005s 00:05:10.366 01:30:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.366 01:30:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.366 01:30:54 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:10.366 01:30:54 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58301 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58301 ']' 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58301 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58301 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:10.366 killing process with pid 58301 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58301' 00:05:10.366 01:30:54 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58301 00:05:10.366 
01:30:54 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58301 00:05:10.649 [2024-11-21 01:30:54.506250] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:11.247 00:05:11.247 real 0m2.885s 00:05:11.247 user 0m5.357s 00:05:11.247 sys 0m0.349s 00:05:11.247 01:30:55 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.247 ************************************ 00:05:11.247 END TEST event_scheduler 00:05:11.247 ************************************ 00:05:11.247 01:30:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.247 01:30:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:11.247 01:30:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:11.247 01:30:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.247 01:30:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.247 01:30:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.247 ************************************ 00:05:11.247 START TEST app_repeat 00:05:11.247 ************************************ 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:11.247 Process app_repeat pid: 58379 00:05:11.247 spdk_app_start Round 0 00:05:11.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58379 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58379' 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:11.247 01:30:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58379 /var/tmp/spdk-nbd.sock 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58379 ']' 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
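[editor's note] app_repeat drives its app entirely through the /var/tmp/spdk-nbd.sock RPC socket: two malloc bdevs are created and exported as /dev/nbd0 and /dev/nbd1, as the nbd_common.sh trace that follows shows. Reproduced by hand against a running app it would look roughly like the sketch below; the RPC names and sizes are taken from the trace, while the surrounding checks (waitfornbd, cleanup traps) are only referenced, not reimplemented.

    # Sketch: replay the traced RPCs against an app listening on /var/tmp/spdk-nbd.sock.
    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create 64 4096    # 64 MB malloc bdev with 4096-byte blocks -> "Malloc0"
    $RPC bdev_malloc_create 64 4096    # second bdev -> "Malloc1"

    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1

    # The waitfornbd helper traced below then polls until the kernel exposes each
    # device node; the exact check lives in autotest_common.sh.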
00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.247 01:30:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.247 [2024-11-21 01:30:55.155047] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:11.247 [2024-11-21 01:30:55.155235] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58379 ] 00:05:11.505 [2024-11-21 01:30:55.307895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.505 [2024-11-21 01:30:55.407230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.505 [2024-11-21 01:30:55.407379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.072 01:30:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.072 01:30:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.072 01:30:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.330 Malloc0 00:05:12.330 01:30:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.588 Malloc1 00:05:12.589 01:30:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.589 01:30:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.847 /dev/nbd0 00:05:12.847 01:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.847 01:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:12.847 01:30:56 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.847 1+0 records in 00:05:12.847 1+0 records out 00:05:12.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440063 s, 9.3 MB/s 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:12.847 01:30:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:12.847 01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.847 01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.847 01:30:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.106 /dev/nbd1 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.106 1+0 records in 00:05:13.106 1+0 records out 00:05:13.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292866 s, 14.0 MB/s 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.106 01:30:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.106 
01:30:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.106 01:30:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.365 { 00:05:13.365 "nbd_device": "/dev/nbd0", 00:05:13.365 "bdev_name": "Malloc0" 00:05:13.365 }, 00:05:13.365 { 00:05:13.365 "nbd_device": "/dev/nbd1", 00:05:13.365 "bdev_name": "Malloc1" 00:05:13.365 } 00:05:13.365 ]' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.365 { 00:05:13.365 "nbd_device": "/dev/nbd0", 00:05:13.365 "bdev_name": "Malloc0" 00:05:13.365 }, 00:05:13.365 { 00:05:13.365 "nbd_device": "/dev/nbd1", 00:05:13.365 "bdev_name": "Malloc1" 00:05:13.365 } 00:05:13.365 ]' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.365 /dev/nbd1' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.365 /dev/nbd1' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:13.365 256+0 records in 00:05:13.365 256+0 records out 00:05:13.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011193 s, 93.7 MB/s 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:13.365 256+0 records in 00:05:13.365 256+0 records out 00:05:13.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198003 s, 53.0 MB/s 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:13.365 256+0 records in 00:05:13.365 256+0 records out 00:05:13.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211922 s, 49.5 MB/s 00:05:13.365 01:30:57 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.365 01:30:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.625 01:30:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.884 01:30:57 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.884 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:14.142 01:30:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:14.142 01:30:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.400 01:30:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:14.965 [2024-11-21 01:30:58.729497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.965 [2024-11-21 01:30:58.804240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.965 [2024-11-21 01:30:58.804241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.965 [2024-11-21 01:30:58.906764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:14.965 [2024-11-21 01:30:58.906814] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.494 spdk_app_start Round 1 00:05:17.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.494 01:31:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.494 01:31:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:17.494 01:31:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58379 /var/tmp/spdk-nbd.sock 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58379 ']' 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
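[annotation] The dd and cmp lines traced above are the data-verification step: random data is written through each /dev/nbdX device and then compared back against a scratch file. A hedged sketch of that write-then-verify loop; the scratch path and block counts are chosen for illustration.

    #!/usr/bin/env bash
    # Sketch of the write-then-verify pattern seen in the trace above.
    # Paths and sizes are illustrative, not taken verbatim from the log.
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest        # assumed scratch file

    # write: generate 1 MiB of random data, then copy it onto every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: each device must read back identical to the scratch file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm -f "$tmp_file"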
00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.494 01:31:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.494 01:31:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.752 Malloc0 00:05:17.752 01:31:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.011 Malloc1 00:05:18.011 01:31:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.011 01:31:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.270 /dev/nbd0 00:05:18.270 01:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.270 01:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.270 1+0 records in 00:05:18.270 1+0 records out 
00:05:18.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457249 s, 9.0 MB/s 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.270 01:31:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.270 01:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.270 01:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.270 01:31:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.529 /dev/nbd1 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.529 1+0 records in 00:05:18.529 1+0 records out 00:05:18.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150105 s, 27.3 MB/s 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.529 01:31:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.529 01:31:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.529 { 00:05:18.529 "nbd_device": "/dev/nbd0", 00:05:18.529 "bdev_name": "Malloc0" 00:05:18.529 }, 00:05:18.529 { 00:05:18.529 "nbd_device": "/dev/nbd1", 00:05:18.529 "bdev_name": "Malloc1" 00:05:18.529 } 
00:05:18.529 ]' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.788 { 00:05:18.788 "nbd_device": "/dev/nbd0", 00:05:18.788 "bdev_name": "Malloc0" 00:05:18.788 }, 00:05:18.788 { 00:05:18.788 "nbd_device": "/dev/nbd1", 00:05:18.788 "bdev_name": "Malloc1" 00:05:18.788 } 00:05:18.788 ]' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.788 /dev/nbd1' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.788 /dev/nbd1' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.788 256+0 records in 00:05:18.788 256+0 records out 00:05:18.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749709 s, 140 MB/s 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.788 256+0 records in 00:05:18.788 256+0 records out 00:05:18.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018829 s, 55.7 MB/s 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.788 256+0 records in 00:05:18.788 256+0 records out 00:05:18.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166136 s, 63.1 MB/s 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.788 01:31:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.047 01:31:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:19.305 01:31:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.306 01:31:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.306 01:31:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.978 01:31:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.238 [2024-11-21 01:31:04.079457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.238 [2024-11-21 01:31:04.157223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.238 [2024-11-21 01:31:04.157310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.497 [2024-11-21 01:31:04.253089] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.497 [2024-11-21 01:31:04.253139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.032 01:31:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.032 spdk_app_start Round 2 00:05:23.032 01:31:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:23.032 01:31:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58379 /var/tmp/spdk-nbd.sock 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58379 ']' 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
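[annotation] The nbd_start_disk / nbd_get_disks / nbd_stop_disk calls traced around here attach malloc bdevs to kernel NBD devices and count the exports by piping the JSON through jq. A small sketch of that flow, assuming rpc.py on PATH; bdev name and socket path are illustrative.

    #!/usr/bin/env bash
    # Sketch of exporting a malloc bdev over NBD and counting exports,
    # mirroring the RPC calls traced above.
    set -euo pipefail

    sock=/var/tmp/spdk-nbd.sock
    rpc="rpc.py -s $sock"

    $rpc bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB bdev, 4 KiB blocks
    $rpc nbd_start_disk Malloc0 /dev/nbd0

    # count exported NBD devices the same way the test does: JSON + jq + grep -c
    count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    echo "exported NBD devices: $count"

    $rpc nbd_stop_disk /dev/nbd0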
00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.032 01:31:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.032 01:31:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.032 Malloc0 00:05:23.032 01:31:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.290 Malloc1 00:05:23.290 01:31:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.290 01:31:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.548 /dev/nbd0 00:05:23.548 01:31:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.548 01:31:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.548 1+0 records in 00:05:23.548 1+0 records out 
00:05:23.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170417 s, 24.0 MB/s 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.548 01:31:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.548 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.548 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.548 01:31:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.807 /dev/nbd1 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.807 1+0 records in 00:05:23.807 1+0 records out 00:05:23.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167598 s, 24.4 MB/s 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.807 01:31:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.807 01:31:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.066 { 00:05:24.066 "nbd_device": "/dev/nbd0", 00:05:24.066 "bdev_name": "Malloc0" 00:05:24.066 }, 00:05:24.066 { 00:05:24.066 "nbd_device": "/dev/nbd1", 00:05:24.066 "bdev_name": "Malloc1" 00:05:24.066 } 
00:05:24.066 ]' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.066 { 00:05:24.066 "nbd_device": "/dev/nbd0", 00:05:24.066 "bdev_name": "Malloc0" 00:05:24.066 }, 00:05:24.066 { 00:05:24.066 "nbd_device": "/dev/nbd1", 00:05:24.066 "bdev_name": "Malloc1" 00:05:24.066 } 00:05:24.066 ]' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.066 /dev/nbd1' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.066 /dev/nbd1' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.066 256+0 records in 00:05:24.066 256+0 records out 00:05:24.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513751 s, 204 MB/s 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.066 256+0 records in 00:05:24.066 256+0 records out 00:05:24.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136666 s, 76.7 MB/s 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.066 256+0 records in 00:05:24.066 256+0 records out 00:05:24.066 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159104 s, 65.9 MB/s 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.066 01:31:07 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.066 01:31:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.325 01:31:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.585 01:31:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.846 01:31:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.846 01:31:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.105 01:31:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.672 [2024-11-21 01:31:09.451519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.672 [2024-11-21 01:31:09.529368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.672 [2024-11-21 01:31:09.529480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.930 [2024-11-21 01:31:09.631396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.930 [2024-11-21 01:31:09.631448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.462 01:31:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58379 /var/tmp/spdk-nbd.sock 00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58379 ']' 00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
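[annotation] Each round traced above follows the same shape: restart the app, re-run the NBD verification, then ask the target to shut itself down with spdk_kill_instance and sleep before the next round. A rough sketch of that loop; wait_for_rpc and nbd_setup_and_verify are placeholders standing in for the earlier sketches, not functions from the log.

    #!/usr/bin/env bash
    # Sketch of the round loop driving the trace above. Helper names are
    # placeholders; only the overall shape follows the log.
    sock=/var/tmp/spdk-nbd.sock

    for round in {0..2}; do
        echo "spdk_app_start Round $round"
        wait_for_rpc                 # poll until the RPC socket answers (see earlier sketch)
        nbd_setup_and_verify         # hypothetical helper: malloc bdevs + dd/cmp check
        rpc.py -s "$sock" spdk_kill_instance SIGTERM
        sleep 3                      # give the app time to shut down before the next round
    done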
00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.462 01:31:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.462 01:31:12 event.app_repeat -- event/event.sh@39 -- # killprocess 58379 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58379 ']' 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58379 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58379 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58379' 00:05:28.462 killing process with pid 58379 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58379 00:05:28.462 01:31:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58379 00:05:28.720 spdk_app_start is called in Round 0. 00:05:28.720 Shutdown signal received, stop current app iteration 00:05:28.720 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:28.720 spdk_app_start is called in Round 1. 00:05:28.720 Shutdown signal received, stop current app iteration 00:05:28.720 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:28.720 spdk_app_start is called in Round 2. 00:05:28.720 Shutdown signal received, stop current app iteration 00:05:28.720 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 reinitialization... 00:05:28.720 spdk_app_start is called in Round 3. 00:05:28.720 Shutdown signal received, stop current app iteration 00:05:28.720 01:31:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.720 01:31:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.720 00:05:28.720 real 0m17.537s 00:05:28.720 user 0m38.364s 00:05:28.720 sys 0m2.050s 00:05:28.720 01:31:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.720 01:31:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.720 ************************************ 00:05:28.720 END TEST app_repeat 00:05:28.720 ************************************ 00:05:28.996 01:31:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.996 01:31:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.996 01:31:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.996 01:31:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.996 01:31:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.996 ************************************ 00:05:28.996 START TEST cpu_locks 00:05:28.996 ************************************ 00:05:28.996 01:31:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.996 * Looking for test storage... 
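[annotation] The killprocess sequence traced above confirms the PID is alive, reads its command name with ps (reactor_0 in this run), prints the "killing process" message, then kills and waits. A minimal sketch of that helper; the trailing call uses the PID from the trace purely as an example.

    #!/usr/bin/env bash
    # Sketch of the killprocess pattern traced above: confirm the PID is alive,
    # look up its command name, send SIGTERM, then wait for it to exit.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                     # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= -p "$pid")     # e.g. "reactor_0" in the trace
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        # 'wait' only covers children of this shell; a real helper would fall
        # back to polling when the target was started elsewhere.
        wait "$pid" 2>/dev/null || true
    }

    # killprocess 58379   # example call with the app_repeat PID from the trace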
00:05:28.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.996 01:31:12 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.996 01:31:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.996 01:31:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.996 01:31:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.996 01:31:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.996 01:31:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.997 01:31:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.997 --rc genhtml_branch_coverage=1 00:05:28.997 --rc genhtml_function_coverage=1 00:05:28.997 --rc genhtml_legend=1 00:05:28.997 --rc geninfo_all_blocks=1 00:05:28.997 --rc geninfo_unexecuted_blocks=1 00:05:28.997 00:05:28.997 ' 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.997 --rc genhtml_branch_coverage=1 00:05:28.997 --rc genhtml_function_coverage=1 
00:05:28.997 --rc genhtml_legend=1 00:05:28.997 --rc geninfo_all_blocks=1 00:05:28.997 --rc geninfo_unexecuted_blocks=1 00:05:28.997 00:05:28.997 ' 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.997 --rc genhtml_branch_coverage=1 00:05:28.997 --rc genhtml_function_coverage=1 00:05:28.997 --rc genhtml_legend=1 00:05:28.997 --rc geninfo_all_blocks=1 00:05:28.997 --rc geninfo_unexecuted_blocks=1 00:05:28.997 00:05:28.997 ' 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.997 --rc genhtml_branch_coverage=1 00:05:28.997 --rc genhtml_function_coverage=1 00:05:28.997 --rc genhtml_legend=1 00:05:28.997 --rc geninfo_all_blocks=1 00:05:28.997 --rc geninfo_unexecuted_blocks=1 00:05:28.997 00:05:28.997 ' 00:05:28.997 01:31:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.997 01:31:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.997 01:31:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.997 01:31:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.997 01:31:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.997 ************************************ 00:05:28.997 START TEST default_locks 00:05:28.997 ************************************ 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58810 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58810 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.997 01:31:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.997 [2024-11-21 01:31:12.916840] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
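The cmp_versions / lt trace near the top of this block is scripts/common.sh deciding which lcov flag spelling to use. A minimal re-statement of that field-by-field comparison, simplified from the trace rather than copied from the script (only the helper names lt and cmp_versions come from the log; the rest is an assumption for illustration):

    # Sketch: compare dotted versions field by field, the way the trace above walks through it.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt 1.15 2 && echo "version is below 2"   # true in this run; the trace then sets lcov_rc_opt='--rc lcov_branch_coverage=1 ...'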
00:05:28.997 [2024-11-21 01:31:12.916965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58810 ] 00:05:29.262 [2024-11-21 01:31:13.066889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.262 [2024-11-21 01:31:13.149501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.833 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.833 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.833 01:31:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58810 00:05:29.833 01:31:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58810 00:05:29.833 01:31:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58810 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58810 ']' 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58810 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58810 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.094 killing process with pid 58810 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58810' 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58810 00:05:30.094 01:31:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58810 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58810 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58810 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58810 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58810 ']' 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.480 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.480 ERROR: process (pid: 58810) is no longer running 00:05:31.480 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58810) - No such process 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:31.480 00:05:31.480 real 0m2.274s 00:05:31.480 user 0m2.279s 00:05:31.480 sys 0m0.416s 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.480 01:31:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.480 ************************************ 00:05:31.480 END TEST default_locks 00:05:31.480 ************************************ 00:05:31.480 01:31:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:31.480 01:31:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.480 01:31:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.480 01:31:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.480 ************************************ 00:05:31.480 START TEST default_locks_via_rpc 00:05:31.480 ************************************ 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58863 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58863 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58863 ']' 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.480 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
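The default_locks run that finishes above boils down to: start one spdk_tgt, confirm it holds a POSIX lock on its per-core file, then confirm the helpers fail once the process is gone. A minimal sketch of the positive half, assuming util-linux lslocks and the /var/tmp/spdk_cpu_lock_* naming that appears later in this log (not the literal cpu_locks.sh code):

    # Sketch: does a freshly started target hold its core-0 lock?
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                   # stand-in for waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"; wait "$pid" 2>/dev/null || true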
00:05:31.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.481 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.481 01:31:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.481 01:31:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.481 [2024-11-21 01:31:15.226016] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:31.481 [2024-11-21 01:31:15.226134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ] 00:05:31.481 [2024-11-21 01:31:15.382263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.741 [2024-11-21 01:31:15.462251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58863 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58863 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58863 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58863 ']' 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58863 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.314 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.314 
01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58863 00:05:32.575 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.575 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.575 killing process with pid 58863 00:05:32.575 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58863' 00:05:32.575 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58863 00:05:32.575 01:31:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58863 00:05:33.518 00:05:33.518 real 0m2.272s 00:05:33.518 user 0m2.301s 00:05:33.518 sys 0m0.403s 00:05:33.518 01:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.518 01:31:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 ************************************ 00:05:33.518 END TEST default_locks_via_rpc 00:05:33.518 ************************************ 00:05:33.518 01:31:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:33.518 01:31:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.518 01:31:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.518 01:31:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 ************************************ 00:05:33.518 START TEST non_locking_app_on_locked_coremask 00:05:33.518 ************************************ 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58915 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58915 /var/tmp/spdk.sock 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58915 ']' 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.518 01:31:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.779 [2024-11-21 01:31:17.537125] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
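The default_locks_via_rpc case traced above exercises the same lock through JSON-RPC instead of startup flags. Sketched by hand, assuming SPDK's scripts/rpc.py exposes the same method names that rpc_cmd uses in the trace:

    # Sketch: drop and re-take the core locks on a live target over /var/tmp/spdk.sock.
    ./scripts/rpc.py framework_disable_cpumask_locks   # lslocks no longer shows spdk_cpu_lock_000
    ./scripts/rpc.py framework_enable_cpumask_locks    # the lock file is claimed again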
00:05:33.779 [2024-11-21 01:31:17.537234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ] 00:05:33.779 [2024-11-21 01:31:17.697989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.039 [2024-11-21 01:31:17.792837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58931 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58931 /var/tmp/spdk2.sock 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58931 ']' 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.611 01:31:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:34.611 [2024-11-21 01:31:18.443777] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:34.611 [2024-11-21 01:31:18.444571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58931 ] 00:05:34.872 [2024-11-21 01:31:18.616835] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.872 [2024-11-21 01:31:18.616883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.872 [2024-11-21 01:31:18.817139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.257 01:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.257 01:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.257 01:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58915 00:05:36.257 01:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58915 00:05:36.257 01:31:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58915 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58915 ']' 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58915 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.257 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58915 00:05:36.518 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.518 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.518 killing process with pid 58915 00:05:36.518 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58915' 00:05:36.518 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58915 00:05:36.518 01:31:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58915 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58931 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58931 ']' 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58931 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58931 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.054 killing process with pid 58931 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58931' 00:05:39.054 01:31:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58931 00:05:39.054 01:31:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58931 00:05:39.996 00:05:39.996 real 0m6.281s 00:05:39.996 user 0m6.530s 00:05:39.996 sys 0m0.813s 00:05:39.996 01:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.996 01:31:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 ************************************ 00:05:39.996 END TEST non_locking_app_on_locked_coremask 00:05:39.996 ************************************ 00:05:39.996 01:31:23 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:39.996 01:31:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.996 01:31:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.996 01:31:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.996 ************************************ 00:05:39.996 START TEST locking_app_on_unlocked_coremask 00:05:39.996 ************************************ 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59033 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59033 /var/tmp/spdk.sock 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59033 ']' 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.996 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.997 01:31:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.997 [2024-11-21 01:31:23.847122] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:39.997 [2024-11-21 01:31:23.847222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59033 ] 00:05:40.255 [2024-11-21 01:31:23.997480] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
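The two-process cases in this part of the log all revolve around the --disable-cpumask-locks startup flag: a second target may share a core only if one side skips lock acquisition. A condensed sketch of the pattern, with the core mask and socket paths copied from the trace and the surrounding checks omitted:

    # Sketch: first instance claims core 0; the second shares it only because it skips the lock.
    spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # Reversing the flags (locking_app_on_unlocked_coremask) leaves the lock with whichever
    # instance did NOT pass --disable-cpumask-locks.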
00:05:40.255 [2024-11-21 01:31:23.997522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.255 [2024-11-21 01:31:24.075045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59038 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59038 /var/tmp/spdk2.sock 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59038 ']' 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.821 01:31:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.821 [2024-11-21 01:31:24.718535] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:40.821 [2024-11-21 01:31:24.718665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59038 ] 00:05:41.079 [2024-11-21 01:31:24.882350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.337 [2024-11-21 01:31:25.043072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.271 01:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.271 01:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:42.271 01:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59038 00:05:42.271 01:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.271 01:31:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59038 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59033 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59033 ']' 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59033 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59033 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.530 killing process with pid 59033 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59033' 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59033 00:05:42.530 01:31:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59033 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59038 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59038 ']' 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59038 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59038 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.062 killing process with pid 59038 00:05:45.062 01:31:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59038' 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59038 00:05:45.062 01:31:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59038 00:05:45.996 00:05:45.996 real 0m6.033s 00:05:45.996 user 0m6.249s 00:05:45.996 sys 0m0.779s 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 END TEST locking_app_on_unlocked_coremask 00:05:45.996 ************************************ 00:05:45.996 01:31:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:45.996 01:31:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.996 01:31:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.996 01:31:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 START TEST locking_app_on_locked_coremask 00:05:45.996 ************************************ 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59135 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59135 /var/tmp/spdk.sock 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59135 ']' 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.996 01:31:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 [2024-11-21 01:31:29.928627] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:45.996 [2024-11-21 01:31:29.929137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59135 ] 00:05:46.254 [2024-11-21 01:31:30.085161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.254 [2024-11-21 01:31:30.164004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59145 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59145 /var/tmp/spdk2.sock 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59145 /var/tmp/spdk2.sock 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59145 /var/tmp/spdk2.sock 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59145 ']' 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.820 01:31:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.110 [2024-11-21 01:31:30.822809] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:47.110 [2024-11-21 01:31:30.822896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59145 ] 00:05:47.110 [2024-11-21 01:31:30.981657] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59135 has claimed it. 00:05:47.110 [2024-11-21 01:31:30.981708] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.675 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59145) - No such process 00:05:47.675 ERROR: process (pid: 59145) is no longer running 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59135 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.675 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59135 ']' 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.933 killing process with pid 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59135' 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59135 00:05:47.933 01:31:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59135 00:05:49.308 00:05:49.308 real 0m2.986s 00:05:49.308 user 0m3.211s 00:05:49.308 sys 0m0.508s 00:05:49.308 01:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.308 ************************************ 00:05:49.308 END 
TEST locking_app_on_locked_coremask 00:05:49.308 ************************************ 00:05:49.308 01:31:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.308 01:31:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:49.308 01:31:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.308 01:31:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.308 01:31:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.308 ************************************ 00:05:49.308 START TEST locking_overlapped_coremask 00:05:49.308 ************************************ 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59204 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59204 /var/tmp/spdk.sock 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59204 ']' 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.308 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.309 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.309 01:31:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.309 [2024-11-21 01:31:32.952522] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:49.309 [2024-11-21 01:31:32.952665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59204 ] 00:05:49.309 [2024-11-21 01:31:33.108347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.309 [2024-11-21 01:31:33.189901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.309 [2024-11-21 01:31:33.190201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.309 [2024-11-21 01:31:33.190229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59216 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59216 /var/tmp/spdk2.sock 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59216 /var/tmp/spdk2.sock 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59216 /var/tmp/spdk2.sock 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59216 ']' 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.875 01:31:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.133 [2024-11-21 01:31:33.854133] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:05:50.133 [2024-11-21 01:31:33.854259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:05:50.133 [2024-11-21 01:31:34.019536] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59204 has claimed it. 00:05:50.133 [2024-11-21 01:31:34.019587] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.700 ERROR: process (pid: 59216) is no longer running 00:05:50.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59216) - No such process 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59204 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59204 ']' 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59204 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59204 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.700 killing process with pid 59204 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59204' 00:05:50.700 01:31:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59204 00:05:50.700 01:31:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59204 00:05:52.075 00:05:52.075 real 0m2.820s 00:05:52.075 user 0m7.704s 00:05:52.075 sys 0m0.407s 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.075 ************************************ 00:05:52.075 END TEST locking_overlapped_coremask 00:05:52.075 ************************************ 00:05:52.075 01:31:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:52.075 01:31:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.075 01:31:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.075 01:31:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.075 ************************************ 00:05:52.075 START TEST locking_overlapped_coremask_via_rpc 00:05:52.075 ************************************ 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59269 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59269 /var/tmp/spdk.sock 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59269 ']' 00:05:52.075 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.076 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.076 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.076 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.076 01:31:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.076 [2024-11-21 01:31:35.820219] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:52.076 [2024-11-21 01:31:35.820354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:05:52.076 [2024-11-21 01:31:35.983621] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
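The overlapped-coremask failure above is plain mask arithmetic: -m 0x7 selects cores 0-2 and -m 0x1c selects cores 2-4, so the two targets contend for core 2, which is exactly the core named in the claim_cpu_cores error, and check_remaining_locks then expects only /var/tmp/spdk_cpu_lock_000 through _002 to survive. For reference:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 is the only core claimed by both masks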
00:05:52.076 [2024-11-21 01:31:35.983660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.334 [2024-11-21 01:31:36.067085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.334 [2024-11-21 01:31:36.067145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.334 [2024-11-21 01:31:36.067158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59287 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.900 01:31:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.900 [2024-11-21 01:31:36.691814] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:52.900 [2024-11-21 01:31:36.691933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59287 ] 00:05:53.158 [2024-11-21 01:31:36.864945] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:53.158 [2024-11-21 01:31:36.864996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.158 [2024-11-21 01:31:37.070778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:53.158 [2024-11-21 01:31:37.070933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.158 [2024-11-21 01:31:37.070959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.533 [2024-11-21 01:31:38.223743] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59269 has claimed it. 
00:05:54.533 request: 00:05:54.533 { 00:05:54.533 "method": "framework_enable_cpumask_locks", 00:05:54.533 "req_id": 1 00:05:54.533 } 00:05:54.533 Got JSON-RPC error response 00:05:54.533 response: 00:05:54.533 { 00:05:54.533 "code": -32603, 00:05:54.533 "message": "Failed to claim CPU core: 2" 00:05:54.533 } 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59269 /var/tmp/spdk.sock 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59269 ']' 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59287 /var/tmp/spdk2.sock 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59287 ']' 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.533 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.792 00:05:54.792 real 0m2.906s 00:05:54.792 user 0m1.049s 00:05:54.792 sys 0m0.117s 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.792 01:31:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.792 ************************************ 00:05:54.792 END TEST locking_overlapped_coremask_via_rpc 00:05:54.792 ************************************ 00:05:54.792 01:31:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.792 01:31:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59269 ]] 00:05:54.792 01:31:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59269 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59269 ']' 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59269 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59269 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59269' 00:05:54.792 killing process with pid 59269 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59269 00:05:54.792 01:31:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59269 00:05:56.165 01:31:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59287 ]] 00:05:56.165 01:31:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59287 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59287 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.165 
01:31:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59287 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:56.165 killing process with pid 59287 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59287' 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59287 00:05:56.165 01:31:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59287 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59269 ]] 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59269 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59269 ']' 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59269 00:05:57.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59269) - No such process 00:05:57.170 Process with pid 59269 is not found 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59269 is not found' 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59287 ]] 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59287 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59287 ']' 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59287 00:05:57.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59287) - No such process 00:05:57.170 Process with pid 59287 is not found 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59287 is not found' 00:05:57.170 01:31:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.170 00:05:57.170 real 0m28.368s 00:05:57.170 user 0m49.282s 00:05:57.170 sys 0m4.255s 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.170 01:31:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 ************************************ 00:05:57.170 END TEST cpu_locks 00:05:57.170 ************************************ 00:05:57.170 00:05:57.170 real 0m53.390s 00:05:57.170 user 1m39.835s 00:05:57.170 sys 0m7.079s 00:05:57.170 01:31:41 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.170 01:31:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.170 ************************************ 00:05:57.170 END TEST event 00:05:57.170 ************************************ 00:05:57.170 01:31:41 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.170 01:31:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.170 01:31:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.170 01:31:41 -- common/autotest_common.sh@10 -- # set +x 00:05:57.429 ************************************ 00:05:57.429 START TEST thread 00:05:57.429 ************************************ 00:05:57.429 01:31:41 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.429 * Looking for test storage... 
00:05:57.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:57.429 01:31:41 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.429 01:31:41 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.429 01:31:41 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.429 01:31:41 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.429 01:31:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.429 01:31:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.429 01:31:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.429 01:31:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.429 01:31:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.429 01:31:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.429 01:31:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.429 01:31:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.429 01:31:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.429 01:31:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.429 01:31:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.429 01:31:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:57.429 01:31:41 thread -- scripts/common.sh@345 -- # : 1 00:05:57.429 01:31:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.429 01:31:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.429 01:31:41 thread -- scripts/common.sh@365 -- # decimal 1 00:05:57.429 01:31:41 thread -- scripts/common.sh@353 -- # local d=1 00:05:57.429 01:31:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.429 01:31:41 thread -- scripts/common.sh@355 -- # echo 1 00:05:57.429 01:31:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.429 01:31:41 thread -- scripts/common.sh@366 -- # decimal 2 00:05:57.429 01:31:41 thread -- scripts/common.sh@353 -- # local d=2 00:05:57.429 01:31:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.429 01:31:41 thread -- scripts/common.sh@355 -- # echo 2 00:05:57.429 01:31:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.430 01:31:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.430 01:31:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.430 01:31:41 thread -- scripts/common.sh@368 -- # return 0 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.430 --rc genhtml_branch_coverage=1 00:05:57.430 --rc genhtml_function_coverage=1 00:05:57.430 --rc genhtml_legend=1 00:05:57.430 --rc geninfo_all_blocks=1 00:05:57.430 --rc geninfo_unexecuted_blocks=1 00:05:57.430 00:05:57.430 ' 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.430 --rc genhtml_branch_coverage=1 00:05:57.430 --rc genhtml_function_coverage=1 00:05:57.430 --rc genhtml_legend=1 00:05:57.430 --rc geninfo_all_blocks=1 00:05:57.430 --rc geninfo_unexecuted_blocks=1 00:05:57.430 00:05:57.430 ' 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:57.430 --rc genhtml_branch_coverage=1 00:05:57.430 --rc genhtml_function_coverage=1 00:05:57.430 --rc genhtml_legend=1 00:05:57.430 --rc geninfo_all_blocks=1 00:05:57.430 --rc geninfo_unexecuted_blocks=1 00:05:57.430 00:05:57.430 ' 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.430 --rc genhtml_branch_coverage=1 00:05:57.430 --rc genhtml_function_coverage=1 00:05:57.430 --rc genhtml_legend=1 00:05:57.430 --rc geninfo_all_blocks=1 00:05:57.430 --rc geninfo_unexecuted_blocks=1 00:05:57.430 00:05:57.430 ' 00:05:57.430 01:31:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.430 01:31:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.430 ************************************ 00:05:57.430 START TEST thread_poller_perf 00:05:57.430 ************************************ 00:05:57.430 01:31:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.430 [2024-11-21 01:31:41.289156] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:57.430 [2024-11-21 01:31:41.289564] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59442 ] 00:05:57.688 [2024-11-21 01:31:41.441769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.688 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:57.688 [2024-11-21 01:31:41.538430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.063 [2024-11-21T01:31:43.020Z] ====================================== 00:05:59.063 [2024-11-21T01:31:43.020Z] busy:2614226828 (cyc) 00:05:59.063 [2024-11-21T01:31:43.020Z] total_run_count: 302000 00:05:59.063 [2024-11-21T01:31:43.020Z] tsc_hz: 2600000000 (cyc) 00:05:59.063 [2024-11-21T01:31:43.020Z] ====================================== 00:05:59.063 [2024-11-21T01:31:43.020Z] poller_cost: 8656 (cyc), 3329 (nsec) 00:05:59.063 00:05:59.063 real 0m1.436s 00:05:59.063 user 0m1.269s 00:05:59.063 sys 0m0.060s 00:05:59.063 01:31:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.063 ************************************ 00:05:59.063 END TEST thread_poller_perf 00:05:59.063 ************************************ 00:05:59.063 01:31:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.063 01:31:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.063 01:31:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:59.063 01:31:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.063 01:31:42 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.063 ************************************ 00:05:59.063 START TEST thread_poller_perf 00:05:59.063 ************************************ 00:05:59.063 01:31:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.063 [2024-11-21 01:31:42.778503] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:05:59.063 [2024-11-21 01:31:42.778622] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59478 ] 00:05:59.063 [2024-11-21 01:31:42.935031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.063 [2024-11-21 01:31:43.016307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.321 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:00.256 [2024-11-21T01:31:44.214Z] ====================================== 00:06:00.257 [2024-11-21T01:31:44.214Z] busy:2602948984 (cyc) 00:06:00.257 [2024-11-21T01:31:44.214Z] total_run_count: 5200000 00:06:00.257 [2024-11-21T01:31:44.214Z] tsc_hz: 2600000000 (cyc) 00:06:00.257 [2024-11-21T01:31:44.214Z] ====================================== 00:06:00.257 [2024-11-21T01:31:44.214Z] poller_cost: 500 (cyc), 192 (nsec) 00:06:00.257 00:06:00.257 real 0m1.395s 00:06:00.257 user 0m1.223s 00:06:00.257 sys 0m0.066s 00:06:00.257 01:31:44 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.257 ************************************ 00:06:00.257 END TEST thread_poller_perf 00:06:00.257 ************************************ 00:06:00.257 01:31:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.257 01:31:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:00.257 00:06:00.257 real 0m3.052s 00:06:00.257 user 0m2.588s 00:06:00.257 sys 0m0.250s 00:06:00.257 01:31:44 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.257 01:31:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.257 ************************************ 00:06:00.257 END TEST thread 00:06:00.257 ************************************ 00:06:00.257 01:31:44 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:00.257 01:31:44 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.257 01:31:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.257 01:31:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.257 01:31:44 -- common/autotest_common.sh@10 -- # set +x 00:06:00.517 ************************************ 00:06:00.517 START TEST app_cmdline 00:06:00.517 ************************************ 00:06:00.517 01:31:44 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:00.517 * Looking for test storage... 
00:06:00.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:00.517 01:31:44 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.517 01:31:44 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.517 01:31:44 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.517 01:31:44 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:00.517 01:31:44 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.518 01:31:44 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:00.518 01:31:44 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.518 01:31:44 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.518 01:31:44 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.518 01:31:44 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.518 --rc genhtml_branch_coverage=1 00:06:00.518 --rc genhtml_function_coverage=1 00:06:00.518 --rc genhtml_legend=1 00:06:00.518 --rc geninfo_all_blocks=1 00:06:00.518 --rc geninfo_unexecuted_blocks=1 00:06:00.518 00:06:00.518 ' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.518 --rc genhtml_branch_coverage=1 00:06:00.518 --rc genhtml_function_coverage=1 00:06:00.518 --rc genhtml_legend=1 00:06:00.518 --rc geninfo_all_blocks=1 00:06:00.518 --rc geninfo_unexecuted_blocks=1 00:06:00.518 
00:06:00.518 ' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.518 --rc genhtml_branch_coverage=1 00:06:00.518 --rc genhtml_function_coverage=1 00:06:00.518 --rc genhtml_legend=1 00:06:00.518 --rc geninfo_all_blocks=1 00:06:00.518 --rc geninfo_unexecuted_blocks=1 00:06:00.518 00:06:00.518 ' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.518 --rc genhtml_branch_coverage=1 00:06:00.518 --rc genhtml_function_coverage=1 00:06:00.518 --rc genhtml_legend=1 00:06:00.518 --rc geninfo_all_blocks=1 00:06:00.518 --rc geninfo_unexecuted_blocks=1 00:06:00.518 00:06:00.518 ' 00:06:00.518 01:31:44 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:00.518 01:31:44 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59562 00:06:00.518 01:31:44 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59562 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59562 ']' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.518 01:31:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:00.518 01:31:44 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:00.518 [2024-11-21 01:31:44.408394] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:06:00.518 [2024-11-21 01:31:44.408496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59562 ] 00:06:00.776 [2024-11-21 01:31:44.559483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.776 [2024-11-21 01:31:44.641524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.342 01:31:45 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.342 01:31:45 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:01.342 01:31:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:01.600 { 00:06:01.600 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:01.600 "fields": { 00:06:01.600 "major": 25, 00:06:01.600 "minor": 1, 00:06:01.600 "patch": 0, 00:06:01.600 "suffix": "-pre", 00:06:01.600 "commit": "557f022f6" 00:06:01.600 } 00:06:01.600 } 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:01.600 01:31:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:01.600 01:31:45 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:01.908 request: 00:06:01.908 { 00:06:01.908 "method": "env_dpdk_get_mem_stats", 00:06:01.908 "req_id": 1 00:06:01.908 } 00:06:01.908 Got JSON-RPC error response 00:06:01.908 response: 00:06:01.908 { 00:06:01.908 "code": -32601, 00:06:01.908 "message": "Method not found" 00:06:01.908 } 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.908 01:31:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59562 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59562 ']' 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59562 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59562 00:06:01.908 killing process with pid 59562 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59562' 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 59562 00:06:01.908 01:31:45 app_cmdline -- common/autotest_common.sh@978 -- # wait 59562 00:06:03.286 ************************************ 00:06:03.286 END TEST app_cmdline 00:06:03.286 ************************************ 00:06:03.286 00:06:03.286 real 0m2.662s 00:06:03.286 user 0m2.999s 00:06:03.286 sys 0m0.399s 00:06:03.286 01:31:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.286 01:31:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.286 01:31:46 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:03.286 01:31:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.286 01:31:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.286 01:31:46 -- common/autotest_common.sh@10 -- # set +x 00:06:03.286 ************************************ 00:06:03.286 START TEST version 00:06:03.286 ************************************ 00:06:03.286 01:31:46 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:03.286 * Looking for test storage... 
00:06:03.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:03.286 01:31:46 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.286 01:31:46 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.286 01:31:46 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.286 01:31:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.287 01:31:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.287 01:31:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.287 01:31:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.287 01:31:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.287 01:31:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.287 01:31:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.287 01:31:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.287 01:31:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.287 01:31:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.287 01:31:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.287 01:31:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.287 01:31:47 version -- scripts/common.sh@344 -- # case "$op" in 00:06:03.287 01:31:47 version -- scripts/common.sh@345 -- # : 1 00:06:03.287 01:31:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.287 01:31:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.287 01:31:47 version -- scripts/common.sh@365 -- # decimal 1 00:06:03.287 01:31:47 version -- scripts/common.sh@353 -- # local d=1 00:06:03.287 01:31:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.287 01:31:47 version -- scripts/common.sh@355 -- # echo 1 00:06:03.287 01:31:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.287 01:31:47 version -- scripts/common.sh@366 -- # decimal 2 00:06:03.287 01:31:47 version -- scripts/common.sh@353 -- # local d=2 00:06:03.287 01:31:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.287 01:31:47 version -- scripts/common.sh@355 -- # echo 2 00:06:03.287 01:31:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.287 01:31:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.287 01:31:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.287 01:31:47 version -- scripts/common.sh@368 -- # return 0 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.287 --rc genhtml_branch_coverage=1 00:06:03.287 --rc genhtml_function_coverage=1 00:06:03.287 --rc genhtml_legend=1 00:06:03.287 --rc geninfo_all_blocks=1 00:06:03.287 --rc geninfo_unexecuted_blocks=1 00:06:03.287 00:06:03.287 ' 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.287 --rc genhtml_branch_coverage=1 00:06:03.287 --rc genhtml_function_coverage=1 00:06:03.287 --rc genhtml_legend=1 00:06:03.287 --rc geninfo_all_blocks=1 00:06:03.287 --rc geninfo_unexecuted_blocks=1 00:06:03.287 00:06:03.287 ' 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.287 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:03.287 --rc genhtml_branch_coverage=1 00:06:03.287 --rc genhtml_function_coverage=1 00:06:03.287 --rc genhtml_legend=1 00:06:03.287 --rc geninfo_all_blocks=1 00:06:03.287 --rc geninfo_unexecuted_blocks=1 00:06:03.287 00:06:03.287 ' 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.287 --rc genhtml_branch_coverage=1 00:06:03.287 --rc genhtml_function_coverage=1 00:06:03.287 --rc genhtml_legend=1 00:06:03.287 --rc geninfo_all_blocks=1 00:06:03.287 --rc geninfo_unexecuted_blocks=1 00:06:03.287 00:06:03.287 ' 00:06:03.287 01:31:47 version -- app/version.sh@17 -- # get_header_version major 00:06:03.287 01:31:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # cut -f2 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.287 01:31:47 version -- app/version.sh@17 -- # major=25 00:06:03.287 01:31:47 version -- app/version.sh@18 -- # get_header_version minor 00:06:03.287 01:31:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # cut -f2 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.287 01:31:47 version -- app/version.sh@18 -- # minor=1 00:06:03.287 01:31:47 version -- app/version.sh@19 -- # get_header_version patch 00:06:03.287 01:31:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # cut -f2 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.287 01:31:47 version -- app/version.sh@19 -- # patch=0 00:06:03.287 01:31:47 version -- app/version.sh@20 -- # get_header_version suffix 00:06:03.287 01:31:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # tr -d '"' 00:06:03.287 01:31:47 version -- app/version.sh@14 -- # cut -f2 00:06:03.287 01:31:47 version -- app/version.sh@20 -- # suffix=-pre 00:06:03.287 01:31:47 version -- app/version.sh@22 -- # version=25.1 00:06:03.287 01:31:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:03.287 01:31:47 version -- app/version.sh@28 -- # version=25.1rc0 00:06:03.287 01:31:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:03.287 01:31:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:03.287 01:31:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:03.287 01:31:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:03.287 ************************************ 00:06:03.287 END TEST version 00:06:03.287 ************************************ 00:06:03.287 00:06:03.287 real 0m0.177s 00:06:03.287 user 0m0.116s 00:06:03.287 sys 0m0.086s 00:06:03.287 01:31:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.287 01:31:47 version -- common/autotest_common.sh@10 -- # set +x 00:06:03.287 01:31:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:03.287 01:31:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:03.287 01:31:47 -- spdk/autotest.sh@194 -- # uname -s 00:06:03.287 01:31:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:03.287 01:31:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:03.287 01:31:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:03.287 01:31:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:03.287 01:31:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:03.287 01:31:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:03.287 01:31:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.287 01:31:47 -- common/autotest_common.sh@10 -- # set +x 00:06:03.287 ************************************ 00:06:03.287 START TEST blockdev_nvme 00:06:03.287 ************************************ 00:06:03.287 01:31:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:03.287 * Looking for test storage... 00:06:03.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:03.287 01:31:47 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.287 01:31:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.287 01:31:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.544 01:31:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.544 01:31:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:03.544 01:31:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.544 01:31:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.544 --rc genhtml_branch_coverage=1 00:06:03.544 --rc genhtml_function_coverage=1 00:06:03.544 --rc genhtml_legend=1 00:06:03.544 --rc geninfo_all_blocks=1 00:06:03.544 --rc geninfo_unexecuted_blocks=1 00:06:03.544 00:06:03.544 ' 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.545 --rc genhtml_branch_coverage=1 00:06:03.545 --rc genhtml_function_coverage=1 00:06:03.545 --rc genhtml_legend=1 00:06:03.545 --rc geninfo_all_blocks=1 00:06:03.545 --rc geninfo_unexecuted_blocks=1 00:06:03.545 00:06:03.545 ' 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.545 --rc genhtml_branch_coverage=1 00:06:03.545 --rc genhtml_function_coverage=1 00:06:03.545 --rc genhtml_legend=1 00:06:03.545 --rc geninfo_all_blocks=1 00:06:03.545 --rc geninfo_unexecuted_blocks=1 00:06:03.545 00:06:03.545 ' 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.545 --rc genhtml_branch_coverage=1 00:06:03.545 --rc genhtml_function_coverage=1 00:06:03.545 --rc genhtml_legend=1 00:06:03.545 --rc geninfo_all_blocks=1 00:06:03.545 --rc geninfo_unexecuted_blocks=1 00:06:03.545 00:06:03.545 ' 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.545 01:31:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59734 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:03.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59734 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59734 ']' 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.545 01:31:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.545 01:31:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:03.545 [2024-11-21 01:31:47.348292] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:06:03.545 [2024-11-21 01:31:47.348416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59734 ] 00:06:03.803 [2024-11-21 01:31:47.501955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.803 [2024-11-21 01:31:47.598316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.368 01:31:48 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.368 01:31:48 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:04.368 01:31:48 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:04.368 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.368 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.627 01:31:48 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.627 01:31:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:04.627 01:31:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.627 01:31:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.627 01:31:48 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.627 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.886 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.886 01:31:48 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:04.886 01:31:48 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bd4d3b25-def9-4905-ba11-11867ade2c86"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bd4d3b25-def9-4905-ba11-11867ade2c86",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "07f9527d-fe8a-4e30-ae8f-4f00afbcfb83"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "07f9527d-fe8a-4e30-ae8f-4f00afbcfb83",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c0205c2b-dba8-4d3d-bb02-8972d50b8b29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0205c2b-dba8-4d3d-bb02-8972d50b8b29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "88b1f3a3-4303-4d01-940e-efb1732f59d3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "88b1f3a3-4303-4d01-940e-efb1732f59d3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8e0099dd-7235-49db-a585-34dc13fa6d7b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "8e0099dd-7235-49db-a585-34dc13fa6d7b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3e3d6cff-6743-457d-9411-285f7ba02c05"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3e3d6cff-6743-457d-9411-285f7ba02c05",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:04.887 01:31:48 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59734 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59734 ']' 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59734 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:04.887 01:31:48 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59734 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.887 killing process with pid 59734 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59734' 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59734 00:06:04.887 01:31:48 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59734 00:06:06.346 01:31:50 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:06.346 01:31:50 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.346 01:31:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:06.346 01:31:50 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.346 01:31:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.346 ************************************ 00:06:06.346 START TEST bdev_hello_world 00:06:06.346 ************************************ 00:06:06.346 01:31:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:06.346 [2024-11-21 01:31:50.261349] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:06.346 [2024-11-21 01:31:50.261469] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59817 ] 00:06:06.605 [2024-11-21 01:31:50.420725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.605 [2024-11-21 01:31:50.520014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.173 [2024-11-21 01:31:51.057479] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:07.173 [2024-11-21 01:31:51.057530] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:07.173 [2024-11-21 01:31:51.057552] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:07.173 [2024-11-21 01:31:51.059964] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:07.173 [2024-11-21 01:31:51.060633] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:07.173 [2024-11-21 01:31:51.060662] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:07.173 [2024-11-21 01:31:51.061171] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
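For reference, the setup_nvme_conf / hello_bdev sequence exercised above reduces to a handful of commands. This is a minimal sketch reconstructed from the log, not the exact script: paths are abbreviated to the repo root, only one of the four attach-controller entries is shown, and rpc.py stands in for the rpc_cmd wrapper the test uses.

# setup_nvme_conf: gen_nvme.sh emits a bdev subsystem config with one
# bdev_nvme_attach_controller entry per local PCIe controller, and the test
# loads it into the running spdk_tgt over RPC:
scripts/rpc.py load_subsystem_config -j "$(scripts/gen_nvme.sh)"
#   { "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller",
#     "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }, ... ] }

# blockdev.sh then collects the names of unclaimed bdevs to build bdev_list:
scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'

# bdev_hello_world runs the standalone example against the first bdev:
build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1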
00:06:07.173 00:06:07.173 [2024-11-21 01:31:51.061200] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:08.108 ************************************ 00:06:08.108 END TEST bdev_hello_world 00:06:08.108 ************************************ 00:06:08.108 00:06:08.108 real 0m1.559s 00:06:08.108 user 0m1.278s 00:06:08.108 sys 0m0.173s 00:06:08.108 01:31:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.108 01:31:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:08.108 01:31:51 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:08.108 01:31:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:08.108 01:31:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.108 01:31:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.108 ************************************ 00:06:08.108 START TEST bdev_bounds 00:06:08.108 ************************************ 00:06:08.108 Process bdevio pid: 59854 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59854 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59854' 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59854 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59854 ']' 00:06:08.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.108 01:31:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:08.108 [2024-11-21 01:31:51.877350] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
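The bdev_bounds test started above drives the bdevio application instead of an example binary. A rough sketch of that flow, with the flags copied verbatim from the command line in the log (presumably -w holds bdevio idle until the tests are kicked off over the default RPC socket) and backgrounding added only for illustration:

# Start bdevio against the same bdev config and wait for /var/tmp/spdk.sock:
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
bdevio_pid=$!

# Trigger the I/O boundary test suites (the CUnit output that follows), then
# tear the process down, mirroring the killprocess call at the end of the test:
test/bdev/bdevio/tests.py perform_tests
kill "$bdevio_pid" && wait "$bdevio_pid"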
00:06:08.108 [2024-11-21 01:31:51.877461] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59854 ] 00:06:08.108 [2024-11-21 01:31:52.039163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.367 [2024-11-21 01:31:52.138013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.367 [2024-11-21 01:31:52.138257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.367 [2024-11-21 01:31:52.138271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.933 01:31:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.933 01:31:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:08.933 01:31:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:08.933 I/O targets: 00:06:08.933 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:08.933 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:08.933 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.933 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.933 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:08.933 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:08.933 00:06:08.933 00:06:08.933 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.933 http://cunit.sourceforge.net/ 00:06:08.933 00:06:08.933 00:06:08.933 Suite: bdevio tests on: Nvme3n1 00:06:08.933 Test: blockdev write read block ...passed 00:06:08.933 Test: blockdev write zeroes read block ...passed 00:06:08.933 Test: blockdev write zeroes read no split ...passed 00:06:09.191 Test: blockdev write zeroes read split ...passed 00:06:09.191 Test: blockdev write zeroes read split partial ...passed 00:06:09.191 Test: blockdev reset ...[2024-11-21 01:31:52.911129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:09.191 [2024-11-21 01:31:52.917141] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:06:09.191 Test: blockdev write read 8 blocks ...uccessful. 
00:06:09.191 passed 00:06:09.191 Test: blockdev write read size > 128k ...passed 00:06:09.191 Test: blockdev write read invalid size ...passed 00:06:09.191 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.191 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.191 Test: blockdev write read max offset ...passed 00:06:09.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.191 Test: blockdev writev readv 8 blocks ...passed 00:06:09.191 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.191 Test: blockdev writev readv block ...passed 00:06:09.191 Test: blockdev writev readv size > 128k ...passed 00:06:09.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.191 Test: blockdev comparev and writev ...[2024-11-21 01:31:52.936588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a0a000 len:0x1000 00:06:09.191 [2024-11-21 01:31:52.936745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.191 passed 00:06:09.191 Test: blockdev nvme passthru rw ...passed 00:06:09.191 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.191 Test: blockdev nvme admin passthru ...[2024-11-21 01:31:52.939019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.191 [2024-11-21 01:31:52.939057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.191 passed 00:06:09.191 Test: blockdev copy ...passed 00:06:09.191 Suite: bdevio tests on: Nvme2n3 00:06:09.191 Test: blockdev write read block ...passed 00:06:09.191 Test: blockdev write zeroes read block ...passed 00:06:09.191 Test: blockdev write zeroes read no split ...passed 00:06:09.191 Test: blockdev write zeroes read split ...passed 00:06:09.191 Test: blockdev write zeroes read split partial ...passed 00:06:09.191 Test: blockdev reset ...[2024-11-21 01:31:52.994838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.191 [2024-11-21 01:31:52.999071] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:09.191 Test: blockdev write read 8 blocks ...uccessful. 
00:06:09.191 passed 00:06:09.191 Test: blockdev write read size > 128k ...passed 00:06:09.191 Test: blockdev write read invalid size ...passed 00:06:09.191 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.191 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.191 Test: blockdev write read max offset ...passed 00:06:09.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.191 Test: blockdev writev readv 8 blocks ...passed 00:06:09.191 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.191 Test: blockdev writev readv block ...passed 00:06:09.191 Test: blockdev writev readv size > 128k ...passed 00:06:09.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.191 Test: blockdev comparev and writev ...[2024-11-21 01:31:53.016504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:09.191 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x29bc06000 len:0x1000 00:06:09.191 [2024-11-21 01:31:53.016633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.191 passed 00:06:09.191 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.191 Test: blockdev nvme admin passthru ...[2024-11-21 01:31:53.018913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.191 [2024-11-21 01:31:53.018944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.191 passed 00:06:09.191 Test: blockdev copy ...passed 00:06:09.191 Suite: bdevio tests on: Nvme2n2 00:06:09.191 Test: blockdev write read block ...passed 00:06:09.191 Test: blockdev write zeroes read block ...passed 00:06:09.191 Test: blockdev write zeroes read no split ...passed 00:06:09.191 Test: blockdev write zeroes read split ...passed 00:06:09.191 Test: blockdev write zeroes read split partial ...passed 00:06:09.191 Test: blockdev reset ...[2024-11-21 01:31:53.076215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.191 passed 00:06:09.191 Test: blockdev write read 8 blocks ...[2024-11-21 01:31:53.080179] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:09.191 passed 00:06:09.191 Test: blockdev write read size > 128k ...passed 00:06:09.191 Test: blockdev write read invalid size ...passed 00:06:09.191 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.191 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.191 Test: blockdev write read max offset ...passed 00:06:09.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.191 Test: blockdev writev readv 8 blocks ...passed 00:06:09.191 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.191 Test: blockdev writev readv block ...passed 00:06:09.191 Test: blockdev writev readv size > 128k ...passed 00:06:09.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.191 Test: blockdev comparev and writev ...[2024-11-21 01:31:53.097901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d423c000 len:0x1000 00:06:09.192 [2024-11-21 01:31:53.097937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.192 passed 00:06:09.192 Test: blockdev nvme passthru rw ...passed 00:06:09.192 Test: blockdev nvme passthru vendor specific ...[2024-11-21 01:31:53.100554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:06:09.192 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0 00:06:09.192 [2024-11-21 01:31:53.100659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.192 passed 00:06:09.192 Test: blockdev copy ...passed 00:06:09.192 Suite: bdevio tests on: Nvme2n1 00:06:09.192 Test: blockdev write read block ...passed 00:06:09.192 Test: blockdev write zeroes read block ...passed 00:06:09.192 Test: blockdev write zeroes read no split ...passed 00:06:09.192 Test: blockdev write zeroes read split ...passed 00:06:09.450 Test: blockdev write zeroes read split partial ...passed 00:06:09.450 Test: blockdev reset ...[2024-11-21 01:31:53.157641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:09.450 [2024-11-21 01:31:53.161056] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spasseduccessful. 
00:06:09.450 00:06:09.450 Test: blockdev write read 8 blocks ...passed 00:06:09.450 Test: blockdev write read size > 128k ...passed 00:06:09.450 Test: blockdev write read invalid size ...passed 00:06:09.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.450 Test: blockdev write read max offset ...passed 00:06:09.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.450 Test: blockdev writev readv 8 blocks ...passed 00:06:09.450 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.450 Test: blockdev writev readv block ...passed 00:06:09.450 Test: blockdev writev readv size > 128k ...passed 00:06:09.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.450 Test: blockdev comparev and writev ...[2024-11-21 01:31:53.179220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4238000 len:0x1000 00:06:09.450 [2024-11-21 01:31:53.179256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.450 passed 00:06:09.450 Test: blockdev nvme passthru rw ...passed 00:06:09.450 Test: blockdev nvme passthru vendor specific ...[2024-11-21 01:31:53.181743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:09.450 [2024-11-21 01:31:53.181773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.450 passed 00:06:09.450 Test: blockdev nvme admin passthru ...passed 00:06:09.450 Test: blockdev copy ...passed 00:06:09.450 Suite: bdevio tests on: Nvme1n1 00:06:09.450 Test: blockdev write read block ...passed 00:06:09.450 Test: blockdev write zeroes read block ...passed 00:06:09.450 Test: blockdev write zeroes read no split ...passed 00:06:09.450 Test: blockdev write zeroes read split ...passed 00:06:09.450 Test: blockdev write zeroes read split partial ...passed 00:06:09.450 Test: blockdev reset ...[2024-11-21 01:31:53.239092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:09.450 passed 00:06:09.450 Test: blockdev write read 8 blocks ...[2024-11-21 01:31:53.242549] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:09.450 passed 00:06:09.450 Test: blockdev write read size > 128k ...passed 00:06:09.450 Test: blockdev write read invalid size ...passed 00:06:09.450 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.450 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.450 Test: blockdev write read max offset ...passed 00:06:09.450 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.450 Test: blockdev writev readv 8 blocks ...passed 00:06:09.450 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.450 Test: blockdev writev readv block ...passed 00:06:09.450 Test: blockdev writev readv size > 128k ...passed 00:06:09.450 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.450 Test: blockdev comparev and writev ...[2024-11-21 01:31:53.261529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4234000 len:0x1000 00:06:09.450 [2024-11-21 01:31:53.261567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:09.450 passed 00:06:09.450 Test: blockdev nvme passthru rw ...passed 00:06:09.450 Test: blockdev nvme passthru vendor specific ...[2024-11-21 01:31:53.264383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:06:09.450 Test: blockdev nvme admin passthru ...RP2 0x0 00:06:09.450 [2024-11-21 01:31:53.264478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:09.450 passed 00:06:09.450 Test: blockdev copy ...passed 00:06:09.450 Suite: bdevio tests on: Nvme0n1 00:06:09.450 Test: blockdev write read block ...passed 00:06:09.450 Test: blockdev write zeroes read block ...passed 00:06:09.450 Test: blockdev write zeroes read no split ...passed 00:06:09.450 Test: blockdev write zeroes read split ...passed 00:06:09.450 Test: blockdev write zeroes read split partial ...passed 00:06:09.450 Test: blockdev reset ...[2024-11-21 01:31:53.322512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:09.450 [2024-11-21 01:31:53.326746] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spassed 00:06:09.451 Test: blockdev write read 8 blocks ...uccessful. 00:06:09.451 passed 00:06:09.451 Test: blockdev write read size > 128k ...passed 00:06:09.451 Test: blockdev write read invalid size ...passed 00:06:09.451 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:09.451 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:09.451 Test: blockdev write read max offset ...passed 00:06:09.451 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:09.451 Test: blockdev writev readv 8 blocks ...passed 00:06:09.451 Test: blockdev writev readv 30 x 1block ...passed 00:06:09.451 Test: blockdev writev readv block ...passed 00:06:09.451 Test: blockdev writev readv size > 128k ...passed 00:06:09.451 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:09.451 Test: blockdev comparev and writev ...passed 00:06:09.451 Test: blockdev nvme passthru rw ...[2024-11-21 01:31:53.342803] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:09.451 separate metadata which is not supported yet. 
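The skip above is consistent with the bdev_get_bdevs dump earlier in this run: Nvme0n1 is the only namespace reporting separate, non-interleaved metadata ("md_size": 64 with "md_interleave": false), which bdevio's comparev_and_writev test does not handle yet. A quick way to confirm that from the RPC output (a sketch; the jq field selection is illustrative):

scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave, dif_type}'
# -> md_size 64, md_interleave false, dif_type 0, per the dump above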
00:06:09.451 passed 00:06:09.451 Test: blockdev nvme passthru vendor specific ...passed 00:06:09.451 Test: blockdev nvme admin passthru ...[2024-11-21 01:31:53.344200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:09.451 [2024-11-21 01:31:53.344240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:09.451 passed 00:06:09.451 Test: blockdev copy ...passed 00:06:09.451 00:06:09.451 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.451 suites 6 6 n/a 0 0 00:06:09.451 tests 138 138 138 0 0 00:06:09.451 asserts 893 893 893 0 n/a 00:06:09.451 00:06:09.451 Elapsed time = 1.230 seconds 00:06:09.451 0 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59854 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59854 ']' 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59854 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59854 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.451 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59854' 00:06:09.451 killing process with pid 59854 00:06:09.709 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59854 00:06:09.709 01:31:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59854 00:06:10.276 01:31:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:10.276 00:06:10.276 real 0m2.243s 00:06:10.276 user 0m5.744s 00:06:10.276 sys 0m0.274s 00:06:10.276 01:31:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.276 01:31:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 ************************************ 00:06:10.276 END TEST bdev_bounds 00:06:10.276 ************************************ 00:06:10.276 01:31:54 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:10.276 01:31:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:10.276 01:31:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.276 01:31:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 ************************************ 00:06:10.276 START TEST bdev_nbd 00:06:10.276 ************************************ 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59908 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59908 /var/tmp/spdk-nbd.sock 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59908 ']' 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.276 01:31:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 [2024-11-21 01:31:54.199577] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
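The bdev_nbd test starting here follows a fixed per-device pattern. A condensed sketch of the commands the harness issues (taken from the nbd_common.sh and autotest_common.sh calls visible below, shown for /dev/nbd0 only, with paths abbreviated):

# bdev_svc exposes the same bdev config over a dedicated RPC socket:
test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &

# Export a bdev as a kernel nbd device, wait for it to show up, and read one
# 4 KiB block through it as a sanity check (the waitfornbd / dd pattern below):
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct

# Inspect the bdev-to-nbd mapping, then detach the device again:
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device'
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0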
00:06:10.276 [2024-11-21 01:31:54.199848] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:10.533 [2024-11-21 01:31:54.359261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.533 [2024-11-21 01:31:54.456932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.100 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.667 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.668 1+0 records in 
00:06:11.668 1+0 records out 00:06:11.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998304 s, 4.1 MB/s 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.668 1+0 records in 00:06:11.668 1+0 records out 00:06:11.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113125 s, 3.6 MB/s 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.668 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:11.926 1+0 records in 00:06:11.926 1+0 records out 00:06:11.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110372 s, 3.7 MB/s 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:11.926 01:31:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.184 1+0 records in 00:06:12.184 1+0 records out 00:06:12.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145084 s, 2.8 MB/s 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.184 01:31:56 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.184 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.443 1+0 records in 00:06:12.443 1+0 records out 00:06:12.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799673 s, 5.1 MB/s 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.443 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:12.702 1+0 records in 00:06:12.702 1+0 records out 00:06:12.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110298 s, 3.7 MB/s 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:12.702 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.960 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd0", 00:06:12.960 "bdev_name": "Nvme0n1" 00:06:12.960 }, 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd1", 00:06:12.960 "bdev_name": "Nvme1n1" 00:06:12.960 }, 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd2", 00:06:12.960 "bdev_name": "Nvme2n1" 00:06:12.960 }, 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd3", 00:06:12.960 "bdev_name": "Nvme2n2" 00:06:12.960 }, 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd4", 00:06:12.960 "bdev_name": "Nvme2n3" 00:06:12.960 }, 00:06:12.960 { 00:06:12.960 "nbd_device": "/dev/nbd5", 00:06:12.960 "bdev_name": "Nvme3n1" 00:06:12.960 } 00:06:12.961 ]' 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd0", 00:06:12.961 "bdev_name": "Nvme0n1" 00:06:12.961 }, 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd1", 00:06:12.961 "bdev_name": "Nvme1n1" 00:06:12.961 }, 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd2", 00:06:12.961 "bdev_name": "Nvme2n1" 00:06:12.961 }, 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd3", 00:06:12.961 "bdev_name": "Nvme2n2" 00:06:12.961 }, 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd4", 00:06:12.961 "bdev_name": "Nvme2n3" 00:06:12.961 }, 00:06:12.961 { 00:06:12.961 "nbd_device": "/dev/nbd5", 00:06:12.961 "bdev_name": "Nvme3n1" 00:06:12.961 } 00:06:12.961 ]' 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.961 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.219 01:31:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.480 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:13.739 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:13.739 01:31:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:13.739 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.739 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.000 01:31:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.260 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.522 01:31:58 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.522 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:14.784 /dev/nbd0 00:06:14.784 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.784 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.785 
01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.785 1+0 records in 00:06:14.785 1+0 records out 00:06:14.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297826 s, 13.8 MB/s 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:14.785 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:15.046 /dev/nbd1 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.046 1+0 records in 00:06:15.046 1+0 records out 00:06:15.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511521 s, 8.0 MB/s 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.046 01:31:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:15.305 /dev/nbd10 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.305 1+0 records in 00:06:15.305 1+0 records out 00:06:15.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395322 s, 10.4 MB/s 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.305 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.306 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.306 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.306 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.306 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.306 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:15.564 /dev/nbd11 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.564 01:31:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.564 1+0 records in 00:06:15.564 1+0 records out 00:06:15.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388713 s, 10.5 MB/s 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.564 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:15.823 /dev/nbd12 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.823 1+0 records in 00:06:15.823 1+0 records out 00:06:15.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430924 s, 9.5 MB/s 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:15.823 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:16.081 /dev/nbd13 
00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.081 1+0 records in 00:06:16.081 1+0 records out 00:06:16.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000978098 s, 4.2 MB/s 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.081 01:31:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.341 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd0", 00:06:16.341 "bdev_name": "Nvme0n1" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd1", 00:06:16.341 "bdev_name": "Nvme1n1" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd10", 00:06:16.341 "bdev_name": "Nvme2n1" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd11", 00:06:16.341 "bdev_name": "Nvme2n2" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd12", 00:06:16.341 "bdev_name": "Nvme2n3" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd13", 00:06:16.341 "bdev_name": "Nvme3n1" 00:06:16.341 } 00:06:16.341 ]' 00:06:16.341 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.341 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd0", 00:06:16.341 "bdev_name": "Nvme0n1" 00:06:16.341 }, 00:06:16.341 { 00:06:16.341 "nbd_device": "/dev/nbd1", 00:06:16.341 "bdev_name": "Nvme1n1" 00:06:16.342 
}, 00:06:16.342 { 00:06:16.342 "nbd_device": "/dev/nbd10", 00:06:16.342 "bdev_name": "Nvme2n1" 00:06:16.342 }, 00:06:16.342 { 00:06:16.342 "nbd_device": "/dev/nbd11", 00:06:16.342 "bdev_name": "Nvme2n2" 00:06:16.342 }, 00:06:16.342 { 00:06:16.342 "nbd_device": "/dev/nbd12", 00:06:16.342 "bdev_name": "Nvme2n3" 00:06:16.342 }, 00:06:16.342 { 00:06:16.342 "nbd_device": "/dev/nbd13", 00:06:16.342 "bdev_name": "Nvme3n1" 00:06:16.342 } 00:06:16.342 ]' 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.342 /dev/nbd1 00:06:16.342 /dev/nbd10 00:06:16.342 /dev/nbd11 00:06:16.342 /dev/nbd12 00:06:16.342 /dev/nbd13' 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.342 /dev/nbd1 00:06:16.342 /dev/nbd10 00:06:16.342 /dev/nbd11 00:06:16.342 /dev/nbd12 00:06:16.342 /dev/nbd13' 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:16.342 256+0 records in 00:06:16.342 256+0 records out 00:06:16.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00823965 s, 127 MB/s 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.342 256+0 records in 00:06:16.342 256+0 records out 00:06:16.342 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124177 s, 8.4 MB/s 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.342 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.601 256+0 records in 00:06:16.601 256+0 records out 00:06:16.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198165 s, 5.3 MB/s 00:06:16.601 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.601 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:16.860 256+0 records in 00:06:16.860 256+0 records out 00:06:16.860 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.228161 s, 4.6 MB/s 00:06:16.860 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.860 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:17.118 256+0 records in 00:06:17.118 256+0 records out 00:06:17.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225151 s, 4.7 MB/s 00:06:17.118 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.118 01:32:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:17.375 256+0 records in 00:06:17.375 256+0 records out 00:06:17.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.228135 s, 4.6 MB/s 00:06:17.375 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.375 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:17.633 256+0 records in 00:06:17.633 256+0 records out 00:06:17.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218095 s, 4.8 MB/s 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:17.633 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.634 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.892 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:18.151 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.151 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.151 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.151 01:32:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.151 01:32:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.151 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.411 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.669 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:18.670 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.670 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.670 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.670 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.928 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:19.189 01:32:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:19.189 malloc_lvol_verify 00:06:19.189 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:19.450 c86c7189-ebd8-404e-beae-c73293bc4cfc 00:06:19.450 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:19.708 6423225f-a912-4749-b4b5-65d31c45b5a7 00:06:19.708 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:19.967 /dev/nbd0 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:19.967 mke2fs 1.47.0 (5-Feb-2023) 00:06:19.967 Discarding device blocks: 0/4096 done 00:06:19.967 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:19.967 00:06:19.967 Allocating group tables: 0/1 done 00:06:19.967 Writing inode tables: 0/1 done 00:06:19.967 Creating journal (1024 blocks): done 00:06:19.967 Writing superblocks and filesystem accounting information: 0/1 done 00:06:19.967 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:19.967 01:32:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.967 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59908 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59908 ']' 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59908 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59908 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.225 killing process with pid 59908 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59908' 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59908 00:06:20.225 01:32:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59908 00:06:20.792 01:32:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:20.792 00:06:20.792 real 0m10.459s 00:06:20.792 user 0m14.549s 00:06:20.792 sys 0m3.305s 00:06:20.792 01:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.792 ************************************ 00:06:20.792 END TEST bdev_nbd 00:06:20.792 ************************************ 00:06:20.792 01:32:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:20.792 01:32:04 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:20.792 01:32:04 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:20.792 skipping fio tests on NVMe due to multi-ns failures. 00:06:20.792 01:32:04 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
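Stripped of the xtrace noise, the lvol-over-nbd check that closes the bdev_nbd test above reduces to a short sequence of RPC calls; a condensed sketch, assuming the same rpc.py script and /var/tmp/spdk-nbd.sock socket used throughout this run (the real helper retries the capacity check, this sketch only samples it once):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev with 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume inside it
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
    [[ -e /sys/block/nbd0/size ]] && (( $(cat /sys/block/nbd0/size) != 0 ))   # check the kernel reports a non-zero capacity
    mkfs.ext4 /dev/nbd0                                    # prove the exported device is usable end to end
    $rpc nbd_stop_disk /dev/nbd0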
00:06:20.792 01:32:04 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:20.792 01:32:04 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.792 01:32:04 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:20.792 01:32:04 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.792 01:32:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.792 ************************************ 00:06:20.792 START TEST bdev_verify 00:06:20.792 ************************************ 00:06:20.792 01:32:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:20.792 [2024-11-21 01:32:04.694367] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:20.792 [2024-11-21 01:32:04.694463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60293 ] 00:06:21.051 [2024-11-21 01:32:04.843520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.051 [2024-11-21 01:32:04.922782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.051 [2024-11-21 01:32:04.922934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.618 Running I/O for 5 seconds... 00:06:23.706 20096.00 IOPS, 78.50 MiB/s [2024-11-21T01:32:09.042Z] 21760.00 IOPS, 85.00 MiB/s [2024-11-21T01:32:09.611Z] 21440.00 IOPS, 83.75 MiB/s [2024-11-21T01:32:10.996Z] 22224.00 IOPS, 86.81 MiB/s [2024-11-21T01:32:10.996Z] 22515.20 IOPS, 87.95 MiB/s 00:06:27.039 Latency(us) 00:06:27.039 [2024-11-21T01:32:10.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.039 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0xbd0bd 00:06:27.039 Nvme0n1 : 5.03 1906.73 7.45 0.00 0.00 66888.97 11897.30 96791.63 00:06:27.039 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:27.039 Nvme0n1 : 5.07 1793.59 7.01 0.00 0.00 71166.14 13208.02 104857.60 00:06:27.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0xa0000 00:06:27.039 Nvme1n1 : 5.06 1909.10 7.46 0.00 0.00 66618.27 8368.44 88725.66 00:06:27.039 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0xa0000 length 0xa0000 00:06:27.039 Nvme1n1 : 5.07 1792.54 7.00 0.00 0.00 71049.55 14619.57 95581.74 00:06:27.039 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0x80000 00:06:27.039 Nvme2n1 : 5.08 1915.69 7.48 0.00 0.00 66306.27 13409.67 74610.22 00:06:27.039 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x80000 length 0x80000 00:06:27.039 Nvme2n1 : 5.07 1792.07 7.00 0.00 0.00 70711.51 16333.59 75416.81 00:06:27.039 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0x80000 00:06:27.039 Nvme2n2 : 5.08 1915.19 7.48 0.00 0.00 66165.82 13409.67 70173.93 00:06:27.039 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x80000 length 0x80000 00:06:27.039 Nvme2n2 : 5.07 1791.53 7.00 0.00 0.00 70570.18 17543.48 62914.56 00:06:27.039 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0x80000 00:06:27.039 Nvme2n3 : 5.08 1914.63 7.48 0.00 0.00 65957.52 13712.15 64124.46 00:06:27.039 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x80000 length 0x80000 00:06:27.039 Nvme2n3 : 5.09 1799.50 7.03 0.00 0.00 70106.64 3982.57 63317.86 00:06:27.039 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x0 length 0x20000 00:06:27.039 Nvme3n1 : 5.09 1923.47 7.51 0.00 0.00 65512.80 2986.93 64931.05 00:06:27.039 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:27.039 Verification LBA range: start 0x20000 length 0x20000 00:06:27.039 Nvme3n1 : 5.11 1803.07 7.04 0.00 0.00 69827.63 9023.80 66140.95 00:06:27.039 [2024-11-21T01:32:10.996Z] =================================================================================================================== 00:06:27.039 [2024-11-21T01:32:10.996Z] Total : 22257.09 86.94 0.00 0.00 68336.84 2986.93 104857.60 00:06:28.424 00:06:28.424 real 0m7.345s 00:06:28.424 user 0m13.773s 00:06:28.424 sys 0m0.214s 00:06:28.424 ************************************ 00:06:28.424 END TEST bdev_verify 00:06:28.424 ************************************ 00:06:28.424 01:32:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.424 01:32:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:28.424 01:32:12 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:28.424 01:32:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:28.424 01:32:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.424 01:32:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.424 ************************************ 00:06:28.424 START TEST bdev_verify_big_io 00:06:28.424 ************************************ 00:06:28.424 01:32:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:28.424 [2024-11-21 01:32:12.126921] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:06:28.424 [2024-11-21 01:32:12.127086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60385 ] 00:06:28.424 [2024-11-21 01:32:12.288656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.685 [2024-11-21 01:32:12.425453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.685 [2024-11-21 01:32:12.425454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.258 Running I/O for 5 seconds... 00:06:32.114 0.00 IOPS, 0.00 MiB/s [2024-11-21T01:32:17.457Z] 853.50 IOPS, 53.34 MiB/s [2024-11-21T01:32:19.363Z] 1208.67 IOPS, 75.54 MiB/s [2024-11-21T01:32:19.363Z] 1504.50 IOPS, 94.03 MiB/s 00:06:35.406 Latency(us) 00:06:35.406 [2024-11-21T01:32:19.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:35.406 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0xbd0b 00:06:35.406 Nvme0n1 : 5.63 113.75 7.11 0.00 0.00 1072838.18 31255.63 1148594.02 00:06:35.406 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:35.406 Nvme0n1 : 5.72 116.31 7.27 0.00 0.00 1047264.67 26012.75 1155046.79 00:06:35.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0xa000 00:06:35.406 Nvme1n1 : 5.92 118.91 7.43 0.00 0.00 1005362.91 54445.29 961463.53 00:06:35.406 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0xa000 length 0xa000 00:06:35.406 Nvme1n1 : 5.72 116.10 7.26 0.00 0.00 1010212.84 118569.75 987274.63 00:06:35.406 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0x8000 00:06:35.406 Nvme2n1 : 6.01 124.04 7.75 0.00 0.00 938263.06 38111.70 980821.86 00:06:35.406 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x8000 length 0x8000 00:06:35.406 Nvme2n1 : 5.92 125.39 7.84 0.00 0.00 920071.63 59284.87 1006632.96 00:06:35.406 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0x8000 00:06:35.406 Nvme2n2 : 6.02 123.68 7.73 0.00 0.00 907336.43 38111.70 1006632.96 00:06:35.406 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x8000 length 0x8000 00:06:35.406 Nvme2n2 : 5.92 129.74 8.11 0.00 0.00 865111.70 49404.06 1025991.29 00:06:35.406 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0x8000 00:06:35.406 Nvme2n3 : 6.02 127.63 7.98 0.00 0.00 853330.71 53235.40 1025991.29 00:06:35.406 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x8000 length 0x8000 00:06:35.406 Nvme2n3 : 5.96 132.25 8.27 0.00 0.00 816617.97 36700.16 1045349.61 00:06:35.406 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x0 length 0x2000 00:06:35.406 Nvme3n1 : 6.02 138.10 8.63 0.00 0.00 763930.09 1852.65 1045349.61 00:06:35.406 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:35.406 Verification LBA range: start 0x2000 length 0x2000 00:06:35.406 Nvme3n1 : 6.02 153.08 9.57 0.00 0.00 685401.90 526.18 1064707.94 00:06:35.406 [2024-11-21T01:32:19.363Z] =================================================================================================================== 00:06:35.406 [2024-11-21T01:32:19.363Z] Total : 1518.97 94.94 0.00 0.00 896170.53 526.18 1155046.79 00:06:36.780 00:06:36.780 real 0m8.539s 00:06:36.780 user 0m16.011s 00:06:36.780 sys 0m0.301s 00:06:36.780 ************************************ 00:06:36.780 END TEST bdev_verify_big_io 00:06:36.780 ************************************ 00:06:36.780 01:32:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.780 01:32:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:36.780 01:32:20 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.780 01:32:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:36.780 01:32:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.780 01:32:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:36.780 ************************************ 00:06:36.780 START TEST bdev_write_zeroes 00:06:36.780 ************************************ 00:06:36.780 01:32:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:36.780 [2024-11-21 01:32:20.716232] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:36.780 [2024-11-21 01:32:20.716350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60500 ] 00:06:37.039 [2024-11-21 01:32:20.875788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.039 [2024-11-21 01:32:20.971694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.606 Running I/O for 1 seconds... 
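The verify, verify_big_io and write_zeroes stages above all drive the same bdevperf example binary against the bdev.json config and differ only in the workload flags; condensed from the invocations recorded in this log, where -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds and -m the core mask (the trailing empty '' argument seen in the run_test lines is omitted here):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $bdevperf --json $conf -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify: 4 KiB verify workload on cores 0-1
    $bdevperf --json $conf -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io: same, with 64 KiB I/O
    $bdevperf --json $conf -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes: 1 s run on a single core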
00:06:39.015 69120.00 IOPS, 270.00 MiB/s 00:06:39.015 Latency(us) 00:06:39.015 [2024-11-21T01:32:22.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.015 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme0n1 : 1.02 11462.55 44.78 0.00 0.00 11126.60 5721.80 20265.75 00:06:39.015 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme1n1 : 1.02 11435.95 44.67 0.00 0.00 11134.25 8469.27 20265.75 00:06:39.015 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme2n1 : 1.03 11425.62 44.63 0.00 0.00 11097.90 7208.96 20568.22 00:06:39.015 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme2n2 : 1.03 11402.76 44.54 0.00 0.00 11083.24 7057.72 20467.40 00:06:39.015 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme2n3 : 1.03 11389.86 44.49 0.00 0.00 11072.32 6074.68 20568.22 00:06:39.015 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:39.015 Nvme3n1 : 1.03 11377.06 44.44 0.00 0.00 11070.38 5772.21 20366.57 00:06:39.015 [2024-11-21T01:32:22.972Z] =================================================================================================================== 00:06:39.015 [2024-11-21T01:32:22.972Z] Total : 68493.80 267.55 0.00 0.00 11097.45 5721.80 20568.22 00:06:39.594 00:06:39.594 real 0m2.645s 00:06:39.594 user 0m2.362s 00:06:39.594 sys 0m0.170s 00:06:39.594 ************************************ 00:06:39.594 END TEST bdev_write_zeroes 00:06:39.594 ************************************ 00:06:39.594 01:32:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.594 01:32:23 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:39.594 01:32:23 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.594 01:32:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:39.594 01:32:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.594 01:32:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:39.594 ************************************ 00:06:39.594 START TEST bdev_json_nonenclosed 00:06:39.594 ************************************ 00:06:39.594 01:32:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:39.594 [2024-11-21 01:32:23.422296] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:06:39.594 [2024-11-21 01:32:23.422411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60547 ] 00:06:39.854 [2024-11-21 01:32:23.582718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.854 [2024-11-21 01:32:23.678169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.854 [2024-11-21 01:32:23.678250] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:39.854 [2024-11-21 01:32:23.678266] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:39.854 [2024-11-21 01:32:23.678275] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.113 00:06:40.113 real 0m0.495s 00:06:40.113 user 0m0.298s 00:06:40.113 sys 0m0.092s 00:06:40.113 01:32:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.113 01:32:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 ************************************ 00:06:40.113 END TEST bdev_json_nonenclosed 00:06:40.113 ************************************ 00:06:40.113 01:32:23 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.113 01:32:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:40.113 01:32:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.113 01:32:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.113 ************************************ 00:06:40.113 START TEST bdev_json_nonarray 00:06:40.113 ************************************ 00:06:40.113 01:32:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:40.113 [2024-11-21 01:32:23.979869] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:40.114 [2024-11-21 01:32:23.979986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60573 ] 00:06:40.371 [2024-11-21 01:32:24.138058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.371 [2024-11-21 01:32:24.235413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.371 [2024-11-21 01:32:24.235516] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:06:40.371 [2024-11-21 01:32:24.235540] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:40.371 [2024-11-21 01:32:24.235553] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.629 00:06:40.629 real 0m0.494s 00:06:40.629 user 0m0.303s 00:06:40.629 sys 0m0.088s 00:06:40.629 ************************************ 00:06:40.629 END TEST bdev_json_nonarray 00:06:40.629 ************************************ 00:06:40.629 01:32:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.629 01:32:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:40.629 01:32:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:40.629 00:06:40.629 real 0m37.331s 00:06:40.629 user 0m57.517s 00:06:40.630 sys 0m5.311s 00:06:40.630 ************************************ 00:06:40.630 END TEST blockdev_nvme 00:06:40.630 ************************************ 00:06:40.630 01:32:24 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.630 01:32:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.630 01:32:24 -- spdk/autotest.sh@209 -- # uname -s 00:06:40.630 01:32:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:40.630 01:32:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:40.630 01:32:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.630 01:32:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.630 01:32:24 -- common/autotest_common.sh@10 -- # set +x 00:06:40.630 ************************************ 00:06:40.630 START TEST blockdev_nvme_gpt 00:06:40.630 ************************************ 00:06:40.630 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:40.630 * Looking for test storage... 
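The gpt suite that starts here is the same blockdev.sh driver invoked with a different test_type argument, which routes setup through setup_gpt_conf instead of the plain NVMe path. A minimal sketch of running it by hand (repo path taken from the trace; root privileges are an assumption, since setup.sh rebinds PCI devices):

    cd /home/vagrant/spdk_repo/spdk
    ./test/bdev/blockdev.sh gpt    # test_type=gpt -> setup_gpt_conf, then the bdev tests below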
00:06:40.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.888 01:32:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.888 --rc genhtml_branch_coverage=1 00:06:40.888 --rc genhtml_function_coverage=1 00:06:40.888 --rc genhtml_legend=1 00:06:40.888 --rc geninfo_all_blocks=1 00:06:40.888 --rc geninfo_unexecuted_blocks=1 00:06:40.888 00:06:40.888 ' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.888 --rc 
genhtml_branch_coverage=1 00:06:40.888 --rc genhtml_function_coverage=1 00:06:40.888 --rc genhtml_legend=1 00:06:40.888 --rc geninfo_all_blocks=1 00:06:40.888 --rc geninfo_unexecuted_blocks=1 00:06:40.888 00:06:40.888 ' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.888 --rc genhtml_branch_coverage=1 00:06:40.888 --rc genhtml_function_coverage=1 00:06:40.888 --rc genhtml_legend=1 00:06:40.888 --rc geninfo_all_blocks=1 00:06:40.888 --rc geninfo_unexecuted_blocks=1 00:06:40.888 00:06:40.888 ' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.888 --rc genhtml_branch_coverage=1 00:06:40.888 --rc genhtml_function_coverage=1 00:06:40.888 --rc genhtml_legend=1 00:06:40.888 --rc geninfo_all_blocks=1 00:06:40.888 --rc geninfo_unexecuted_blocks=1 00:06:40.888 00:06:40.888 ' 00:06:40.888 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:40.888 01:32:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:40.888 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:40.888 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.888 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60659 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60659 
00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60659 ']' 00:06:40.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.889 01:32:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.889 01:32:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:40.889 [2024-11-21 01:32:24.745179] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:40.889 [2024-11-21 01:32:24.745299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60659 ] 00:06:41.147 [2024-11-21 01:32:24.900493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.147 [2024-11-21 01:32:24.995045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.715 01:32:25 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.715 01:32:25 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:41.715 01:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:41.715 01:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:41.715 01:32:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:41.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.231 Waiting for block devices as requested 00:06:42.231 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.231 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.488 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.488 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.752 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:47.752 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.752 01:32:31 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:47.752 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:47.753 01:32:31 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:47.753 BYT; 00:06:47.753 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:47.753 BYT; 00:06:47.753 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.753 01:32:31 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:47.753 01:32:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:48.688 The operation has completed successfully. 00:06:48.688 01:32:32 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:49.696 The operation has completed successfully. 00:06:49.696 01:32:33 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:50.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:50.520 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.520 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.520 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:50.778 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.778 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:50.778 [] 00:06:50.778 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:50.778 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:50.778 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.778 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:51.037 01:32:34 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.037 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:51.037 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "bb848764-e3c6-4f16-b3dd-e32c8bbc71be"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "bb848764-e3c6-4f16-b3dd-e32c8bbc71be",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "36464321-34fb-4571-adae-77e9e0b049c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "36464321-34fb-4571-adae-77e9e0b049c0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c95e0ca7-5e3a-424d-8aff-f1504e132d7a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c95e0ca7-5e3a-424d-8aff-f1504e132d7a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "63fa5fae-ffe5-4e28-83eb-38cfb0101afa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63fa5fae-ffe5-4e28-83eb-38cfb0101afa",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "2d33e091-0b47-4602-adb4-3aa12a305cc9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2d33e091-0b47-4602-adb4-3aa12a305cc9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:51.038 01:32:34 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60659 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60659 ']' 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60659 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.038 01:32:34 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60659 00:06:51.297 killing process with pid 60659 00:06:51.297 01:32:35 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.297 01:32:35 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.297 01:32:35 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60659' 00:06:51.297 01:32:35 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60659 00:06:51.297 01:32:35 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60659 00:06:52.670 01:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:52.670 01:32:36 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.670 01:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:52.670 01:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.670 01:32:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:52.670 ************************************ 00:06:52.670 START TEST bdev_hello_world 00:06:52.670 ************************************ 00:06:52.670 01:32:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:52.670 
[2024-11-21 01:32:36.270754] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:52.670 [2024-11-21 01:32:36.270870] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61276 ] 00:06:52.670 [2024-11-21 01:32:36.424180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.670 [2024-11-21 01:32:36.507723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.236 [2024-11-21 01:32:37.006286] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:53.236 [2024-11-21 01:32:37.006332] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:53.236 [2024-11-21 01:32:37.006355] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:53.236 [2024-11-21 01:32:37.008767] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:53.236 [2024-11-21 01:32:37.009653] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:53.236 [2024-11-21 01:32:37.009680] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:53.236 [2024-11-21 01:32:37.010321] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:53.236 00:06:53.236 [2024-11-21 01:32:37.010348] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:53.804 00:06:53.804 real 0m1.502s 00:06:53.804 user 0m1.226s 00:06:53.804 sys 0m0.170s 00:06:53.804 01:32:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.804 ************************************ 00:06:53.804 END TEST bdev_hello_world 00:06:53.804 ************************************ 00:06:53.804 01:32:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:53.804 01:32:37 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:53.804 01:32:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.804 01:32:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.804 01:32:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.063 ************************************ 00:06:54.063 START TEST bdev_bounds 00:06:54.063 ************************************ 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:54.063 Process bdevio pid: 61319 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61319 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61319' 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61319 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61319 ']' 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
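bdevio is launched with -w so it loads the bdev configuration and then waits for an RPC trigger rather than running immediately; the suites below are kicked off by tests.py over the RPC socket. A sketch of the same two-step flow (paths and flags taken from the trace; running the server in the background by hand is an assumption):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests    # runs the per-bdev suites below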
00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:54.063 01:32:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:54.063 [2024-11-21 01:32:37.827432] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:06:54.063 [2024-11-21 01:32:37.827545] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61319 ] 00:06:54.063 [2024-11-21 01:32:37.988211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.323 [2024-11-21 01:32:38.088356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.323 [2024-11-21 01:32:38.088729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.323 [2024-11-21 01:32:38.088819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.891 01:32:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.891 01:32:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:54.891 01:32:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:54.891 I/O targets: 00:06:54.891 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:54.891 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:54.891 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:54.891 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.891 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.891 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:54.891 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:54.891 00:06:54.891 00:06:54.891 CUnit - A unit testing framework for C - Version 2.1-3 00:06:54.891 http://cunit.sourceforge.net/ 00:06:54.891 00:06:54.891 00:06:54.891 Suite: bdevio tests on: Nvme3n1 00:06:54.891 Test: blockdev write read block ...passed 00:06:54.891 Test: blockdev write zeroes read block ...passed 00:06:54.891 Test: blockdev write zeroes read no split ...passed 00:06:54.891 Test: blockdev write zeroes read split ...passed 00:06:54.891 Test: blockdev write zeroes read split partial ...passed 00:06:54.891 Test: blockdev reset ...[2024-11-21 01:32:38.809089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:54.891 [2024-11-21 01:32:38.811690] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:54.891 passed 00:06:54.891 Test: blockdev write read 8 blocks ...passed 00:06:54.891 Test: blockdev write read size > 128k ...passed 00:06:54.891 Test: blockdev write read invalid size ...passed 00:06:54.891 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:54.891 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:54.891 Test: blockdev write read max offset ...passed 00:06:54.891 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:54.891 Test: blockdev writev readv 8 blocks ...passed 00:06:54.891 Test: blockdev writev readv 30 x 1block ...passed 00:06:54.891 Test: blockdev writev readv block ...passed 00:06:54.891 Test: blockdev writev readv size > 128k ...passed 00:06:54.891 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:54.891 Test: blockdev comparev and writev ...[2024-11-21 01:32:38.823272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a04000 len:0x1000 00:06:54.891 [2024-11-21 01:32:38.823313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:54.891 passed 00:06:54.891 Test: blockdev nvme passthru rw ...passed 00:06:54.891 Test: blockdev nvme passthru vendor specific ...passed 00:06:54.891 Test: blockdev nvme admin passthru ...[2024-11-21 01:32:38.825703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:54.891 [2024-11-21 01:32:38.825732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:54.891 passed 00:06:54.891 Test: blockdev copy ...passed 00:06:54.891 Suite: bdevio tests on: Nvme2n3 00:06:54.891 Test: blockdev write read block ...passed 00:06:54.891 Test: blockdev write zeroes read block ...passed 00:06:54.891 Test: blockdev write zeroes read no split ...passed 00:06:55.150 Test: blockdev write zeroes read split ...passed 00:06:55.150 Test: blockdev write zeroes read split partial ...passed 00:06:55.150 Test: blockdev reset ...[2024-11-21 01:32:38.881769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.150 [2024-11-21 01:32:38.886390] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:55.150 passed 00:06:55.150 Test: blockdev write read 8 blocks ...passed 00:06:55.150 Test: blockdev write read size > 128k ...passed 00:06:55.150 Test: blockdev write read invalid size ...passed 00:06:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.150 Test: blockdev write read max offset ...passed 00:06:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.150 Test: blockdev writev readv 8 blocks ...passed 00:06:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.150 Test: blockdev writev readv block ...passed 00:06:55.150 Test: blockdev writev readv size > 128k ...passed 00:06:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.150 Test: blockdev comparev and writev ...[2024-11-21 01:32:38.893232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6a02000 len:0x1000 00:06:55.150 [2024-11-21 01:32:38.893268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev nvme passthru rw ...passed 00:06:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.150 Test: blockdev nvme admin passthru ...[2024-11-21 01:32:38.893874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.150 [2024-11-21 01:32:38.893900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev copy ...passed 00:06:55.150 Suite: bdevio tests on: Nvme2n2 00:06:55.150 Test: blockdev write read block ...passed 00:06:55.150 Test: blockdev write zeroes read block ...passed 00:06:55.150 Test: blockdev write zeroes read no split ...passed 00:06:55.150 Test: blockdev write zeroes read split ...passed 00:06:55.150 Test: blockdev write zeroes read split partial ...passed 00:06:55.150 Test: blockdev reset ...[2024-11-21 01:32:38.952063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.150 passed 00:06:55.150 Test: blockdev write read 8 blocks ...[2024-11-21 01:32:38.955586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:55.150 passed 00:06:55.150 Test: blockdev write read size > 128k ...passed 00:06:55.150 Test: blockdev write read invalid size ...passed 00:06:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.150 Test: blockdev write read max offset ...passed 00:06:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.150 Test: blockdev writev readv 8 blocks ...passed 00:06:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.150 Test: blockdev writev readv block ...passed 00:06:55.150 Test: blockdev writev readv size > 128k ...passed 00:06:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.150 Test: blockdev comparev and writev ...[2024-11-21 01:32:38.962376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cea38000 len:0x1000 00:06:55.150 [2024-11-21 01:32:38.962407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev nvme passthru rw ...passed 00:06:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.150 Test: blockdev nvme admin passthru ...[2024-11-21 01:32:38.963111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.150 [2024-11-21 01:32:38.963129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev copy ...passed 00:06:55.150 Suite: bdevio tests on: Nvme2n1 00:06:55.150 Test: blockdev write read block ...passed 00:06:55.150 Test: blockdev write zeroes read block ...passed 00:06:55.150 Test: blockdev write zeroes read no split ...passed 00:06:55.150 Test: blockdev write zeroes read split ...passed 00:06:55.150 Test: blockdev write zeroes read split partial ...passed 00:06:55.150 Test: blockdev reset ...[2024-11-21 01:32:39.023911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:55.150 passed 00:06:55.150 Test: blockdev write read 8 blocks ...[2024-11-21 01:32:39.027506] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:55.150 passed 00:06:55.150 Test: blockdev write read size > 128k ...passed 00:06:55.150 Test: blockdev write read invalid size ...passed 00:06:55.150 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.150 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.150 Test: blockdev write read max offset ...passed 00:06:55.150 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.150 Test: blockdev writev readv 8 blocks ...passed 00:06:55.150 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.150 Test: blockdev writev readv block ...passed 00:06:55.150 Test: blockdev writev readv size > 128k ...passed 00:06:55.150 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.150 Test: blockdev comparev and writev ...[2024-11-21 01:32:39.034721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cea34000 len:0x1000 00:06:55.150 [2024-11-21 01:32:39.034754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev nvme passthru rw ...passed 00:06:55.150 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.150 Test: blockdev nvme admin passthru ...[2024-11-21 01:32:39.035633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:55.150 [2024-11-21 01:32:39.035656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:55.150 passed 00:06:55.150 Test: blockdev copy ...passed 00:06:55.150 Suite: bdevio tests on: Nvme1n1p2 00:06:55.150 Test: blockdev write read block ...passed 00:06:55.150 Test: blockdev write zeroes read block ...passed 00:06:55.150 Test: blockdev write zeroes read no split ...passed 00:06:55.150 Test: blockdev write zeroes read split ...passed 00:06:55.150 Test: blockdev write zeroes read split partial ...passed 00:06:55.150 Test: blockdev reset ...[2024-11-21 01:32:39.095427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:55.150 passed 00:06:55.150 Test: blockdev write read 8 blocks ...[2024-11-21 01:32:39.098494] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:55.151 passed 00:06:55.151 Test: blockdev write read size > 128k ...passed 00:06:55.151 Test: blockdev write read invalid size ...passed 00:06:55.151 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.151 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.151 Test: blockdev write read max offset ...passed 00:06:55.151 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.151 Test: blockdev writev readv 8 blocks ...passed 00:06:55.151 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.415 Test: blockdev writev readv block ...passed 00:06:55.415 Test: blockdev writev readv size > 128k ...passed 00:06:55.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.415 Test: blockdev comparev and writev ...[2024-11-21 01:32:39.106108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cea30000 len:0x1000 00:06:55.415 [2024-11-21 01:32:39.106138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.415 passed 00:06:55.415 Test: blockdev nvme passthru rw ...passed 00:06:55.415 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.415 Test: blockdev nvme admin passthru ...passed 00:06:55.415 Test: blockdev copy ...passed 00:06:55.415 Suite: bdevio tests on: Nvme1n1p1 00:06:55.415 Test: blockdev write read block ...passed 00:06:55.415 Test: blockdev write zeroes read block ...passed 00:06:55.415 Test: blockdev write zeroes read no split ...passed 00:06:55.415 Test: blockdev write zeroes read split ...passed 00:06:55.415 Test: blockdev write zeroes read split partial ...passed 00:06:55.415 Test: blockdev reset ...[2024-11-21 01:32:39.151825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:55.415 passed 00:06:55.415 Test: blockdev write read 8 blocks ...[2024-11-21 01:32:39.155084] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:55.415 passed 00:06:55.415 Test: blockdev write read size > 128k ...passed 00:06:55.415 Test: blockdev write read invalid size ...passed 00:06:55.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.415 Test: blockdev write read max offset ...passed 00:06:55.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.415 Test: blockdev writev readv 8 blocks ...passed 00:06:55.415 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.415 Test: blockdev writev readv block ...passed 00:06:55.415 Test: blockdev writev readv size > 128k ...passed 00:06:55.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.415 Test: blockdev comparev and writev ...passed 00:06:55.415 Test: blockdev nvme passthru rw ...passed 00:06:55.415 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.415 Test: blockdev nvme admin passthru ...passed 00:06:55.415 Test: blockdev copy ...[2024-11-21 01:32:39.162259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b6c0e000 len:0x1000 00:06:55.415 [2024-11-21 01:32:39.162290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:55.415 passed 00:06:55.415 Suite: bdevio tests on: Nvme0n1 00:06:55.415 Test: blockdev write read block ...passed 00:06:55.415 Test: blockdev write zeroes read block ...passed 00:06:55.415 Test: blockdev write zeroes read no split ...passed 00:06:55.415 Test: blockdev write zeroes read split ...passed 00:06:55.415 Test: blockdev write zeroes read split partial ...passed 00:06:55.415 Test: blockdev reset ...[2024-11-21 01:32:39.208605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:55.415 passed 00:06:55.415 Test: blockdev write read 8 blocks ...[2024-11-21 01:32:39.212065] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:55.415 passed 00:06:55.415 Test: blockdev write read size > 128k ...passed 00:06:55.415 Test: blockdev write read invalid size ...passed 00:06:55.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:55.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:55.415 Test: blockdev write read max offset ...passed 00:06:55.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:55.415 Test: blockdev writev readv 8 blocks ...passed 00:06:55.415 Test: blockdev writev readv 30 x 1block ...passed 00:06:55.415 Test: blockdev writev readv block ...passed 00:06:55.415 Test: blockdev writev readv size > 128k ...passed 00:06:55.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:55.415 Test: blockdev comparev and writev ...passed 00:06:55.415 Test: blockdev nvme passthru rw ...[2024-11-21 01:32:39.217890] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:55.415 separate metadata which is not supported yet. 
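Editor's note: the *ERROR* notice just above is informational — bdevio skips the comparev-and-writev case on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the test does not support yet. Whether a bdev carries metadata can be read from the bdev_get_bdevs RPC output; a small sketch, assuming the application exposing the bdevs is listening on its RPC socket and that the md_size/md_interleave field names match your SPDK build:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {block_size, md_size, md_interleave}'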
00:06:55.415 passed 00:06:55.415 Test: blockdev nvme passthru vendor specific ...passed 00:06:55.415 Test: blockdev nvme admin passthru ...passed 00:06:55.415 Test: blockdev copy ...[2024-11-21 01:32:39.218488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:55.415 [2024-11-21 01:32:39.218518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:55.415 passed 00:06:55.415 00:06:55.415 Run Summary: Type Total Ran Passed Failed Inactive 00:06:55.415 suites 7 7 n/a 0 0 00:06:55.415 tests 161 161 161 0 0 00:06:55.415 asserts 1025 1025 1025 0 n/a 00:06:55.415 00:06:55.415 Elapsed time = 1.193 seconds 00:06:55.415 0 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61319 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61319 ']' 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61319 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61319 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.415 killing process with pid 61319 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61319' 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61319 00:06:55.415 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61319 00:06:55.982 01:32:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:55.982 00:06:55.982 real 0m2.141s 00:06:55.982 user 0m5.442s 00:06:55.982 sys 0m0.282s 00:06:55.982 ************************************ 00:06:55.982 END TEST bdev_bounds 00:06:55.982 ************************************ 00:06:55.982 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.983 01:32:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:56.241 01:32:39 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.241 01:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:56.241 01:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.241 01:32:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:56.241 ************************************ 00:06:56.241 START TEST bdev_nbd 00:06:56.241 ************************************ 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61373 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61373 /var/tmp/spdk-nbd.sock 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61373 ']' 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.241 01:32:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:56.241 [2024-11-21 01:32:40.040396] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
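Editor's note: the traces that follow exercise the NBD export path end to end — bdev_svc is started with the bdev.json config and an RPC socket at /var/tmp/spdk-nbd.sock, each bdev is exported with nbd_start_disk, the resulting /dev/nbdX node is verified, and the exports are torn down again with nbd_stop_disk. A condensed sketch of that round-trip for a single bdev, using only commands visible in the trace (paths assume the same workspace layout as this run):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  nbd=$($rpc nbd_start_disk Nvme0n1)               # no device given: the target picks a free /dev/nbdX
  grep -q -w "$(basename "$nbd")" /proc/partitions && echo "$nbd is up"
  $rpc nbd_get_disks | jq -r '.[] | .nbd_device'   # list every exported nbd device
  $rpc nbd_stop_disk "$nbd"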
00:06:56.241 [2024-11-21 01:32:40.040520] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.500 [2024-11-21 01:32:40.199700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.500 [2024-11-21 01:32:40.297487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.068 01:32:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.326 1+0 records in 00:06:57.326 1+0 records out 00:06:57.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409465 s, 10.0 MB/s 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.326 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.327 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.327 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.327 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.327 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:57.585 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:57.585 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:57.585 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.586 1+0 records in 00:06:57.586 1+0 records out 00:06:57.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404909 s, 10.1 MB/s 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.586 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:57.844 1+0 records in 00:06:57.844 1+0 records out 00:06:57.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546167 s, 7.5 MB/s 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:57.844 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:57.845 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.103 1+0 records in 00:06:58.103 1+0 records out 00:06:58.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448671 s, 9.1 MB/s 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.103 01:32:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.103 1+0 records in 00:06:58.103 1+0 records out 00:06:58.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668366 s, 6.1 MB/s 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.103 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.104 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.104 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.104 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.104 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.363 1+0 records in 00:06:58.363 1+0 records out 00:06:58.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549002 s, 7.5 MB/s 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.363 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:58.621 1+0 records in 00:06:58.621 1+0 records out 00:06:58.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037871 s, 10.8 MB/s 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:58.621 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.879 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd0", 00:06:58.879 "bdev_name": "Nvme0n1" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd1", 00:06:58.879 "bdev_name": "Nvme1n1p1" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd2", 00:06:58.879 "bdev_name": "Nvme1n1p2" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd3", 00:06:58.879 "bdev_name": "Nvme2n1" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd4", 00:06:58.879 "bdev_name": "Nvme2n2" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd5", 00:06:58.879 "bdev_name": "Nvme2n3" 00:06:58.879 }, 00:06:58.879 { 00:06:58.879 "nbd_device": "/dev/nbd6", 00:06:58.879 "bdev_name": "Nvme3n1" 00:06:58.879 } 00:06:58.879 ]' 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd0", 00:06:58.880 "bdev_name": "Nvme0n1" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd1", 00:06:58.880 "bdev_name": "Nvme1n1p1" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd2", 00:06:58.880 "bdev_name": "Nvme1n1p2" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd3", 00:06:58.880 "bdev_name": "Nvme2n1" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd4", 00:06:58.880 "bdev_name": "Nvme2n2" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd5", 00:06:58.880 "bdev_name": "Nvme2n3" 00:06:58.880 }, 00:06:58.880 { 00:06:58.880 "nbd_device": "/dev/nbd6", 00:06:58.880 "bdev_name": "Nvme3n1" 00:06:58.880 } 00:06:58.880 ]' 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.880 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.139 01:32:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.398 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.657 01:32:43 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.657 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.916 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:00.174 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:00.174 01:32:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.174 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.433 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.692 
01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:00.692 /dev/nbd0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.692 1+0 records in 00:07:00.692 1+0 records out 00:07:00.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349584 s, 11.7 MB/s 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.692 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.693 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.693 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.693 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.693 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:00.951 /dev/nbd1 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.951 01:32:44 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.951 1+0 records in 00:07:00.951 1+0 records out 00:07:00.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338497 s, 12.1 MB/s 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:00.951 01:32:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:01.209 /dev/nbd10 00:07:01.209 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:01.209 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.210 1+0 records in 00:07:01.210 1+0 records out 00:07:01.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612635 s, 6.7 MB/s 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.210 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:01.468 /dev/nbd11 00:07:01.468 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:01.468 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:01.468 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:01.468 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.468 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.469 1+0 records in 00:07:01.469 1+0 records out 00:07:01.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466125 s, 8.8 MB/s 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.469 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:01.728 /dev/nbd12 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
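Editor's note: each nbd_start_disk above is followed by the waitfornbd readiness check from common/autotest_common.sh — poll /proc/partitions for the new node, then prove it answers I/O by reading one 4 KiB block with O_DIRECT and confirming a non-zero copy. A compact restatement of the traced commands (retry pacing simplified; the scratch-file path is the one used in this run):

  waitfornbd_sketch() {
      local nbd_name=$1 tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      for ((i = 1; i <= 20; i++)); do
          dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct
          size=$(stat -c %s "$tmp" 2>/dev/null || echo 0); rm -f "$tmp"
          [[ $size != 0 ]] && return 0
      done
      return 1
  }
  waitfornbd_sketch nbd12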
00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.728 1+0 records in 00:07:01.728 1+0 records out 00:07:01.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336609 s, 12.2 MB/s 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.728 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:01.728 /dev/nbd13 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.987 1+0 records in 00:07:01.987 1+0 records out 00:07:01.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384208 s, 10.7 MB/s 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.987 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:01.988 /dev/nbd14 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.988 1+0 records in 00:07:01.988 1+0 records out 00:07:01.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469958 s, 8.7 MB/s 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.988 01:32:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd0", 00:07:02.247 "bdev_name": "Nvme0n1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd1", 00:07:02.247 "bdev_name": "Nvme1n1p1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd10", 00:07:02.247 "bdev_name": "Nvme1n1p2" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd11", 00:07:02.247 "bdev_name": "Nvme2n1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd12", 00:07:02.247 "bdev_name": "Nvme2n2" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd13", 00:07:02.247 "bdev_name": "Nvme2n3" 
00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd14", 00:07:02.247 "bdev_name": "Nvme3n1" 00:07:02.247 } 00:07:02.247 ]' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd0", 00:07:02.247 "bdev_name": "Nvme0n1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd1", 00:07:02.247 "bdev_name": "Nvme1n1p1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd10", 00:07:02.247 "bdev_name": "Nvme1n1p2" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd11", 00:07:02.247 "bdev_name": "Nvme2n1" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd12", 00:07:02.247 "bdev_name": "Nvme2n2" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd13", 00:07:02.247 "bdev_name": "Nvme2n3" 00:07:02.247 }, 00:07:02.247 { 00:07:02.247 "nbd_device": "/dev/nbd14", 00:07:02.247 "bdev_name": "Nvme3n1" 00:07:02.247 } 00:07:02.247 ]' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.247 /dev/nbd1 00:07:02.247 /dev/nbd10 00:07:02.247 /dev/nbd11 00:07:02.247 /dev/nbd12 00:07:02.247 /dev/nbd13 00:07:02.247 /dev/nbd14' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.247 /dev/nbd1 00:07:02.247 /dev/nbd10 00:07:02.247 /dev/nbd11 00:07:02.247 /dev/nbd12 00:07:02.247 /dev/nbd13 00:07:02.247 /dev/nbd14' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:02.247 256+0 records in 00:07:02.247 256+0 records out 00:07:02.247 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00771268 s, 136 MB/s 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.247 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.506 256+0 records in 00:07:02.506 256+0 records out 00:07:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0815825 s, 12.9 MB/s 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.506 256+0 records in 00:07:02.506 256+0 records out 00:07:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0821186 s, 12.8 MB/s 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:02.506 256+0 records in 00:07:02.506 256+0 records out 00:07:02.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.082046 s, 12.8 MB/s 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.506 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:02.765 256+0 records in 00:07:02.765 256+0 records out 00:07:02.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0818554 s, 12.8 MB/s 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:02.765 256+0 records in 00:07:02.765 256+0 records out 00:07:02.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0803706 s, 13.0 MB/s 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:02.765 256+0 records in 00:07:02.765 256+0 records out 00:07:02.765 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.082031 s, 12.8 MB/s 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.765 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:03.024 256+0 records in 00:07:03.024 256+0 records out 00:07:03.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0809796 s, 12.9 MB/s 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.024 01:32:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.282 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.282 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.282 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.282 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.283 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.541 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.542 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.542 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.800 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.059 01:32:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.317 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:04.657 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:04.946 malloc_lvol_verify 00:07:04.947 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:04.947 d74bdf8d-42b4-4dcd-91ca-cdcf142c7e2b 00:07:04.947 01:32:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:05.204 03ab5b4a-1730-4e6d-9b1b-5e05c2969950 00:07:05.204 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:05.463 /dev/nbd0 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:05.463 mke2fs 1.47.0 (5-Feb-2023) 00:07:05.463 Discarding device blocks: 0/4096 done 00:07:05.463 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:05.463 00:07:05.463 Allocating group tables: 0/1 done 00:07:05.463 Writing inode tables: 0/1 done 00:07:05.463 Creating journal (1024 blocks): done 00:07:05.463 Writing superblocks and filesystem accounting information: 0/1 done 00:07:05.463 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:05.463 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61373 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61373 ']' 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61373 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61373 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.722 killing process with pid 61373 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61373' 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61373 00:07:05.722 01:32:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61373 00:07:06.656 01:32:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:06.656 00:07:06.656 real 0m10.276s 00:07:06.656 user 0m14.566s 00:07:06.656 sys 0m3.363s 00:07:06.656 01:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.656 01:32:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:06.656 ************************************ 00:07:06.656 END TEST bdev_nbd 00:07:06.656 ************************************ 00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:06.656 skipping fio tests on NVMe due to multi-ns failures. 00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:06.656 01:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.656 01:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:06.656 01:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.656 01:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.656 ************************************ 00:07:06.656 START TEST bdev_verify 00:07:06.656 ************************************ 00:07:06.656 01:32:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:06.656 [2024-11-21 01:32:50.372997] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:06.656 [2024-11-21 01:32:50.373535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61779 ] 00:07:06.656 [2024-11-21 01:32:50.533649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.915 [2024-11-21 01:32:50.635693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.915 [2024-11-21 01:32:50.635700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.480 Running I/O for 5 seconds... 
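The verify run now in flight drives all seven bdevs through the bdevperf example application: 128 outstanding 4 KiB I/Os per job, split across cores 0 and 1, with every write read back and checked; its results table follows below. The invocation, taken from the run_test line above:

  # -q 128: queue depth per job, -o 4096: I/O size in bytes, -w verify: write then
  # read back and compare, -t 5: run for five seconds, -m 0x3: core mask (cores 0 and 1);
  # -C is passed by the harness and gives the one-job-per-core-per-bdev layout seen below
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3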
00:07:09.799 21888.00 IOPS, 85.50 MiB/s [2024-11-21T01:32:54.688Z] 22944.00 IOPS, 89.62 MiB/s [2024-11-21T01:32:55.622Z] 22272.00 IOPS, 87.00 MiB/s [2024-11-21T01:32:56.556Z] 21952.00 IOPS, 85.75 MiB/s [2024-11-21T01:32:56.556Z] 21913.60 IOPS, 85.60 MiB/s 00:07:12.599 Latency(us) 00:07:12.599 [2024-11-21T01:32:56.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.599 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0xbd0bd 00:07:12.599 Nvme0n1 : 5.09 1546.38 6.04 0.00 0.00 82292.18 9326.28 79449.80 00:07:12.599 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:12.599 Nvme0n1 : 5.08 1537.51 6.01 0.00 0.00 82974.95 15627.82 99211.42 00:07:12.599 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x4ff80 00:07:12.599 Nvme1n1p1 : 5.11 1553.99 6.07 0.00 0.00 82070.22 13107.20 77030.01 00:07:12.599 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:12.599 Nvme1n1p1 : 5.08 1537.04 6.00 0.00 0.00 82780.86 18350.08 87515.77 00:07:12.599 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x4ff7f 00:07:12.599 Nvme1n1p2 : 5.11 1553.53 6.07 0.00 0.00 81937.82 11544.42 72997.02 00:07:12.599 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:12.599 Nvme1n1p2 : 5.08 1536.60 6.00 0.00 0.00 82609.84 19156.68 81869.59 00:07:12.599 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x80000 00:07:12.599 Nvme2n1 : 5.11 1553.12 6.07 0.00 0.00 81814.24 12048.54 70173.93 00:07:12.599 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x80000 length 0x80000 00:07:12.599 Nvme2n1 : 5.08 1536.20 6.00 0.00 0.00 82401.70 18753.38 66947.54 00:07:12.599 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x80000 00:07:12.599 Nvme2n2 : 5.11 1552.22 6.06 0.00 0.00 81725.60 14014.62 72190.42 00:07:12.599 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x80000 length 0x80000 00:07:12.599 Nvme2n2 : 5.08 1535.75 6.00 0.00 0.00 82247.71 18350.08 68560.74 00:07:12.599 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x80000 00:07:12.599 Nvme2n3 : 5.11 1551.81 6.06 0.00 0.00 81605.80 14417.92 77030.01 00:07:12.599 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x80000 length 0x80000 00:07:12.599 Nvme2n3 : 5.09 1545.44 6.04 0.00 0.00 81681.69 2697.06 71383.83 00:07:12.599 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x0 length 0x20000 00:07:12.599 Nvme3n1 : 5.12 1551.40 6.06 0.00 0.00 81459.07 11695.66 79853.10 00:07:12.599 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:12.599 Verification LBA range: start 0x20000 length 0x20000 00:07:12.599 Nvme3n1 
: 5.10 1545.01 6.04 0.00 0.00 81548.36 2734.87 72190.42 00:07:12.599 [2024-11-21T01:32:56.556Z] =================================================================================================================== 00:07:12.599 [2024-11-21T01:32:56.556Z] Total : 21636.01 84.52 0.00 0.00 82079.54 2697.06 99211.42 00:07:13.974 00:07:13.974 real 0m7.363s 00:07:13.974 user 0m13.807s 00:07:13.974 sys 0m0.219s 00:07:13.974 01:32:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.974 01:32:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 ************************************ 00:07:13.974 END TEST bdev_verify 00:07:13.974 ************************************ 00:07:13.974 01:32:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.974 01:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:13.974 01:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.974 01:32:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.974 ************************************ 00:07:13.974 START TEST bdev_verify_big_io 00:07:13.974 ************************************ 00:07:13.974 01:32:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:13.974 [2024-11-21 01:32:57.797875] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:13.974 [2024-11-21 01:32:57.797996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61877 ] 00:07:14.232 [2024-11-21 01:32:57.957095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.232 [2024-11-21 01:32:58.056277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.232 [2024-11-21 01:32:58.056428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.799 Running I/O for 5 seconds... 
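In the table above, MiB/s is simply IOPS multiplied by the 4 KiB I/O size, and the Latency(us) figures are per-I/O times in microseconds. A quick cross-check of the first Nvme0n1 row:

  # MiB/s column: 1546.38 IOPS x 4096 bytes per I/O
  echo '1546.38 * 4096 / 1048576' | bc -l    # ~6.04, matching the table
  # average latency is roughly queue depth / IOPS (Little's law)
  echo '128 / 1546.38 * 1000000' | bc -l     # ~82770 us, close to the 82292.18 reported

The big-I/O pass that has just started repeats the same verify workload with -o 65536, so each I/O moves 64 KiB and the IOPS figures below are correspondingly lower.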
00:07:20.377 1419.00 IOPS, 88.69 MiB/s [2024-11-21T01:33:05.268Z] 2159.00 IOPS, 134.94 MiB/s [2024-11-21T01:33:05.268Z] 2993.67 IOPS, 187.10 MiB/s 00:07:21.311 Latency(us) 00:07:21.311 [2024-11-21T01:33:05.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.311 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0xbd0b 00:07:21.311 Nvme0n1 : 5.86 97.39 6.09 0.00 0.00 1237516.39 23189.66 1367988.38 00:07:21.311 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:21.311 Nvme0n1 : 6.11 129.03 8.06 0.00 0.00 812739.43 34482.02 1206669.00 00:07:21.311 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x4ff8 00:07:21.311 Nvme1n1p1 : 5.77 99.85 6.24 0.00 0.00 1194069.42 112923.57 1155046.79 00:07:21.311 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:21.311 Nvme1n1p1 : 6.20 160.84 10.05 0.00 0.00 633509.75 819.20 1297007.85 00:07:21.311 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x4ff7 00:07:21.311 Nvme1n1p2 : 5.94 103.80 6.49 0.00 0.00 1109109.01 94371.84 1013085.74 00:07:21.311 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:21.311 Nvme1n1p2 : 5.72 100.96 6.31 0.00 0.00 1201443.21 21072.34 1361535.61 00:07:21.311 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x8000 00:07:21.311 Nvme2n1 : 5.95 107.61 6.73 0.00 0.00 1046101.07 79853.10 1277649.53 00:07:21.311 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x8000 length 0x8000 00:07:21.311 Nvme2n1 : 5.83 105.42 6.59 0.00 0.00 1115234.68 84692.68 1135688.47 00:07:21.311 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x8000 00:07:21.311 Nvme2n2 : 6.02 110.21 6.89 0.00 0.00 988153.31 66140.95 1187310.67 00:07:21.311 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x8000 length 0x8000 00:07:21.311 Nvme2n2 : 5.94 108.51 6.78 0.00 0.00 1046378.18 106470.79 1290555.08 00:07:21.311 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x8000 00:07:21.311 Nvme2n3 : 6.08 113.13 7.07 0.00 0.00 933789.23 33272.12 2271376.94 00:07:21.311 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x8000 length 0x8000 00:07:21.311 Nvme2n3 : 5.95 111.32 6.96 0.00 0.00 993627.50 107277.39 1155046.79 00:07:21.311 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x0 length 0x2000 00:07:21.311 Nvme3n1 : 6.19 136.90 8.56 0.00 0.00 747131.55 686.87 2090699.22 00:07:21.311 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:21.311 Verification LBA range: start 0x2000 length 0x2000 00:07:21.311 Nvme3n1 : 6.08 122.57 7.66 0.00 0.00 883305.51 39321.60 1180857.90 00:07:21.311 
[2024-11-21T01:33:05.268Z] =================================================================================================================== 00:07:21.312 [2024-11-21T01:33:05.269Z] Total : 1607.54 100.47 0.00 0.00 967773.89 686.87 2271376.94 00:07:22.686 00:07:22.686 real 0m8.806s 00:07:22.686 user 0m16.673s 00:07:22.686 sys 0m0.231s 00:07:22.686 01:33:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.686 01:33:06 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 ************************************ 00:07:22.686 END TEST bdev_verify_big_io 00:07:22.686 ************************************ 00:07:22.686 01:33:06 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.686 01:33:06 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:22.686 01:33:06 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.686 01:33:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.686 ************************************ 00:07:22.686 START TEST bdev_write_zeroes 00:07:22.686 ************************************ 00:07:22.686 01:33:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:22.944 [2024-11-21 01:33:06.672972] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:22.944 [2024-11-21 01:33:06.673088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61988 ] 00:07:22.944 [2024-11-21 01:33:06.833683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.201 [2024-11-21 01:33:06.932842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.766 Running I/O for 1 seconds... 
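The one-second pass below swaps the workload for write_zeroes, which issues zero-fill requests instead of data writes; it keeps the same queue depth and 4 KiB I/O size but drops the two-core -C -m 0x3 options and runs on a single core, per the run_test line above:

  # same binary and JSON config as the verify runs; only the workload, runtime and
  # core usage change
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1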
00:07:24.699 60912.00 IOPS, 237.94 MiB/s 00:07:24.699 Latency(us) 00:07:24.699 [2024-11-21T01:33:08.656Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.699 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme0n1 : 1.02 8683.72 33.92 0.00 0.00 14707.21 6377.16 30045.74 00:07:24.699 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme1n1p1 : 1.02 8688.59 33.94 0.00 0.00 14678.16 9124.63 26819.35 00:07:24.699 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme1n1p2 : 1.03 8677.97 33.90 0.00 0.00 14653.01 8015.56 25609.45 00:07:24.699 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme2n1 : 1.03 8668.21 33.86 0.00 0.00 14642.41 7410.61 24802.86 00:07:24.699 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme2n2 : 1.03 8658.47 33.82 0.00 0.00 14639.54 7360.20 25206.15 00:07:24.699 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme2n3 : 1.03 8648.35 33.78 0.00 0.00 14637.31 8519.68 26617.70 00:07:24.699 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:24.699 Nvme3n1 : 1.03 8638.65 33.74 0.00 0.00 14630.44 9427.10 27827.59 00:07:24.699 [2024-11-21T01:33:08.656Z] =================================================================================================================== 00:07:24.699 [2024-11-21T01:33:08.656Z] Total : 60663.95 236.97 0.00 0.00 14655.43 6377.16 30045.74 00:07:25.633 00:07:25.633 real 0m2.669s 00:07:25.633 user 0m2.378s 00:07:25.633 sys 0m0.176s 00:07:25.633 01:33:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.633 ************************************ 00:07:25.633 END TEST bdev_write_zeroes 00:07:25.633 ************************************ 00:07:25.633 01:33:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 01:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.633 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.633 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.633 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.633 ************************************ 00:07:25.633 START TEST bdev_json_nonenclosed 00:07:25.633 ************************************ 00:07:25.633 01:33:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.633 [2024-11-21 01:33:09.406943] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
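bdev_json_nonenclosed, which has just started above, is a negative test: bdevperf is pointed at a deliberately malformed configuration file (nonenclosed.json) and is expected to refuse to start, which the "not enclosed in {}" error below confirms. The actual contents of nonenclosed.json are not shown in this log; a hypothetical stand-in for the failure mode would be a file whose body is not wrapped in a top-level object:

  # hypothetical example of a config that is "not enclosed in {}": a bare member
  # instead of a top-level JSON object
  printf '"subsystems": []\n' > /tmp/nonenclosed-example.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/nonenclosed-example.json -q 128 -o 4096 -w write_zeroes -t 1 \
      || echo 'rejected as expected'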
00:07:25.633 [2024-11-21 01:33:09.407059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62041 ] 00:07:25.633 [2024-11-21 01:33:09.567228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.891 [2024-11-21 01:33:09.662082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.891 [2024-11-21 01:33:09.662158] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:25.891 [2024-11-21 01:33:09.662175] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:25.891 [2024-11-21 01:33:09.662184] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.891 00:07:25.891 real 0m0.493s 00:07:25.891 user 0m0.301s 00:07:25.891 sys 0m0.088s 00:07:25.891 01:33:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.891 01:33:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:25.891 ************************************ 00:07:25.891 END TEST bdev_json_nonenclosed 00:07:25.891 ************************************ 00:07:26.149 01:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.149 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:26.149 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.149 01:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.149 ************************************ 00:07:26.149 START TEST bdev_json_nonarray 00:07:26.149 ************************************ 00:07:26.149 01:33:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:26.149 [2024-11-21 01:33:09.961181] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:26.149 [2024-11-21 01:33:09.961295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62063 ] 00:07:26.408 [2024-11-21 01:33:10.122367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.408 [2024-11-21 01:33:10.216558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.408 [2024-11-21 01:33:10.216653] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
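The companion bdev_json_nonarray check above does the same with a configuration whose "subsystems" member is not an array. Between them, the two rejections pin down the top-level shape the config loader expects: a single JSON object containing a "subsystems" array. As a sketch (not taken from this log), a minimally well-formed file would be:

  # minimal well-formed shape: top-level object with "subsystems" as an array
  printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' \
      > /tmp/wellformed-example.json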
00:07:26.408 [2024-11-21 01:33:10.216671] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:26.408 [2024-11-21 01:33:10.216681] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.667 00:07:26.667 real 0m0.492s 00:07:26.667 user 0m0.290s 00:07:26.667 sys 0m0.097s 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:26.667 ************************************ 00:07:26.667 END TEST bdev_json_nonarray 00:07:26.667 ************************************ 00:07:26.667 01:33:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:26.667 01:33:10 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:26.667 01:33:10 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:26.667 01:33:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.667 01:33:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.667 01:33:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.667 ************************************ 00:07:26.667 START TEST bdev_gpt_uuid 00:07:26.667 ************************************ 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62092 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62092 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62092 ']' 00:07:26.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:26.667 01:33:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:26.667 [2024-11-21 01:33:10.539918] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
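bdev_gpt_uuid, starting above, launches a standalone spdk_tgt, loads bdev.json into it, and then looks each GPT partition bdev up by its partition UUID, checking the alias and unique_partition_guid fields in the JSON dumps that follow. Condensed into plain rpc.py calls (the harness uses its rpc_cmd wrapper; the default /var/tmp/spdk.sock socket is assumed), the first lookup amounts to:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # fetch the bdev backing the first GPT partition by its unique partition GUID
  $RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
      | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.partition_name'
  # expected: 6f89f330-603b-4116-ac73-2ca8eae53030 and SPDK_TEST_first,
  # per the bdev_get_bdevs output below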
00:07:26.667 [2024-11-21 01:33:10.540042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62092 ] 00:07:26.925 [2024-11-21 01:33:10.695206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.925 [2024-11-21 01:33:10.791587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.491 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.491 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:27.491 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.491 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.491 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:27.749 Some configs were skipped because the RPC state that can call them passed over. 00:07:27.749 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.749 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:27.749 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.749 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:28.008 { 00:07:28.008 "name": "Nvme1n1p1", 00:07:28.008 "aliases": [ 00:07:28.008 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:28.008 ], 00:07:28.008 "product_name": "GPT Disk", 00:07:28.008 "block_size": 4096, 00:07:28.008 "num_blocks": 655104, 00:07:28.008 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.008 "assigned_rate_limits": { 00:07:28.008 "rw_ios_per_sec": 0, 00:07:28.008 "rw_mbytes_per_sec": 0, 00:07:28.008 "r_mbytes_per_sec": 0, 00:07:28.008 "w_mbytes_per_sec": 0 00:07:28.008 }, 00:07:28.008 "claimed": false, 00:07:28.008 "zoned": false, 00:07:28.008 "supported_io_types": { 00:07:28.008 "read": true, 00:07:28.008 "write": true, 00:07:28.008 "unmap": true, 00:07:28.008 "flush": true, 00:07:28.008 "reset": true, 00:07:28.008 "nvme_admin": false, 00:07:28.008 "nvme_io": false, 00:07:28.008 "nvme_io_md": false, 00:07:28.008 "write_zeroes": true, 00:07:28.008 "zcopy": false, 00:07:28.008 "get_zone_info": false, 00:07:28.008 "zone_management": false, 00:07:28.008 "zone_append": false, 00:07:28.008 "compare": true, 00:07:28.008 "compare_and_write": false, 00:07:28.008 "abort": true, 00:07:28.008 "seek_hole": false, 00:07:28.008 "seek_data": false, 00:07:28.008 "copy": true, 00:07:28.008 "nvme_iov_md": false 00:07:28.008 }, 00:07:28.008 "driver_specific": { 
00:07:28.008 "gpt": { 00:07:28.008 "base_bdev": "Nvme1n1", 00:07:28.008 "offset_blocks": 256, 00:07:28.008 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:28.008 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:28.008 "partition_name": "SPDK_TEST_first" 00:07:28.008 } 00:07:28.008 } 00:07:28.008 } 00:07:28.008 ]' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:28.008 { 00:07:28.008 "name": "Nvme1n1p2", 00:07:28.008 "aliases": [ 00:07:28.008 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:28.008 ], 00:07:28.008 "product_name": "GPT Disk", 00:07:28.008 "block_size": 4096, 00:07:28.008 "num_blocks": 655103, 00:07:28.008 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.008 "assigned_rate_limits": { 00:07:28.008 "rw_ios_per_sec": 0, 00:07:28.008 "rw_mbytes_per_sec": 0, 00:07:28.008 "r_mbytes_per_sec": 0, 00:07:28.008 "w_mbytes_per_sec": 0 00:07:28.008 }, 00:07:28.008 "claimed": false, 00:07:28.008 "zoned": false, 00:07:28.008 "supported_io_types": { 00:07:28.008 "read": true, 00:07:28.008 "write": true, 00:07:28.008 "unmap": true, 00:07:28.008 "flush": true, 00:07:28.008 "reset": true, 00:07:28.008 "nvme_admin": false, 00:07:28.008 "nvme_io": false, 00:07:28.008 "nvme_io_md": false, 00:07:28.008 "write_zeroes": true, 00:07:28.008 "zcopy": false, 00:07:28.008 "get_zone_info": false, 00:07:28.008 "zone_management": false, 00:07:28.008 "zone_append": false, 00:07:28.008 "compare": true, 00:07:28.008 "compare_and_write": false, 00:07:28.008 "abort": true, 00:07:28.008 "seek_hole": false, 00:07:28.008 "seek_data": false, 00:07:28.008 "copy": true, 00:07:28.008 "nvme_iov_md": false 00:07:28.008 }, 00:07:28.008 "driver_specific": { 00:07:28.008 "gpt": { 00:07:28.008 "base_bdev": "Nvme1n1", 00:07:28.008 "offset_blocks": 655360, 00:07:28.008 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:28.008 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:28.008 "partition_name": "SPDK_TEST_second" 00:07:28.008 } 00:07:28.008 } 00:07:28.008 } 00:07:28.008 ]' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62092 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62092 ']' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62092 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62092 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:28.008 killing process with pid 62092 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62092' 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62092 00:07:28.008 01:33:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62092 00:07:29.910 00:07:29.910 real 0m2.954s 00:07:29.910 user 0m3.082s 00:07:29.910 sys 0m0.365s 00:07:29.910 01:33:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.910 ************************************ 00:07:29.910 END TEST bdev_gpt_uuid 00:07:29.910 ************************************ 00:07:29.910 01:33:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:29.910 01:33:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.910 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.168 Waiting for block devices as requested 00:07:30.168 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.168 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:30.168 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:30.426 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:35.704 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:35.704 01:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:35.704 01:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:35.704 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.704 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:35.704 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:35.704 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:35.704 01:33:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:35.704 00:07:35.704 real 0m55.052s 00:07:35.704 user 1m10.403s 00:07:35.704 sys 0m7.520s 00:07:35.704 01:33:19 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.704 01:33:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:35.704 ************************************ 00:07:35.704 END TEST blockdev_nvme_gpt 00:07:35.704 ************************************ 00:07:35.704 01:33:19 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:35.704 01:33:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.704 01:33:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.704 01:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:35.704 ************************************ 00:07:35.704 START TEST nvme 00:07:35.704 ************************************ 00:07:35.704 01:33:19 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:35.962 * Looking for test storage... 00:07:35.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.962 01:33:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.962 01:33:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.962 01:33:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.962 01:33:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.962 01:33:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.962 01:33:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:35.962 01:33:19 nvme -- scripts/common.sh@345 -- # : 1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.962 01:33:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.962 01:33:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@353 -- # local d=1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.962 01:33:19 nvme -- scripts/common.sh@355 -- # echo 1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.962 01:33:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@353 -- # local d=2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.962 01:33:19 nvme -- scripts/common.sh@355 -- # echo 2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.962 01:33:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.962 01:33:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.962 01:33:19 nvme -- scripts/common.sh@368 -- # return 0 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.962 --rc genhtml_branch_coverage=1 00:07:35.962 --rc genhtml_function_coverage=1 00:07:35.962 --rc genhtml_legend=1 00:07:35.962 --rc geninfo_all_blocks=1 00:07:35.962 --rc geninfo_unexecuted_blocks=1 00:07:35.962 00:07:35.962 ' 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.962 --rc genhtml_branch_coverage=1 00:07:35.962 --rc genhtml_function_coverage=1 00:07:35.962 --rc genhtml_legend=1 00:07:35.962 --rc geninfo_all_blocks=1 00:07:35.962 --rc geninfo_unexecuted_blocks=1 00:07:35.962 00:07:35.962 ' 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.962 --rc genhtml_branch_coverage=1 00:07:35.962 --rc genhtml_function_coverage=1 00:07:35.962 --rc genhtml_legend=1 00:07:35.962 --rc geninfo_all_blocks=1 00:07:35.962 --rc geninfo_unexecuted_blocks=1 00:07:35.962 00:07:35.962 ' 00:07:35.962 01:33:19 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.962 --rc genhtml_branch_coverage=1 00:07:35.962 --rc genhtml_function_coverage=1 00:07:35.962 --rc genhtml_legend=1 00:07:35.962 --rc geninfo_all_blocks=1 00:07:35.962 --rc geninfo_unexecuted_blocks=1 00:07:35.962 00:07:35.962 ' 00:07:35.962 01:33:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.785 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.044 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.044 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.044 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:37.044 01:33:20 nvme -- nvme/nvme.sh@79 -- # uname 00:07:37.044 01:33:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:37.044 01:33:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:37.044 01:33:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:37.044 01:33:20 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1075 -- # stubpid=62728 00:07:37.044 Waiting for stub to ready for secondary processes... 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62728 ]] 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:37.044 01:33:20 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:37.044 [2024-11-21 01:33:20.919103] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:07:37.044 [2024-11-21 01:33:20.919224] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:37.978 [2024-11-21 01:33:21.705051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.979 [2024-11-21 01:33:21.798146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.979 [2024-11-21 01:33:21.798471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.979 [2024-11-21 01:33:21.798556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.979 [2024-11-21 01:33:21.813498] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:37.979 [2024-11-21 01:33:21.813531] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.979 [2024-11-21 01:33:21.825732] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:37.979 [2024-11-21 01:33:21.825811] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:37.979 [2024-11-21 01:33:21.827799] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.979 [2024-11-21 01:33:21.828028] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:37.979 [2024-11-21 01:33:21.828066] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:37.979 [2024-11-21 01:33:21.830951] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.979 [2024-11-21 01:33:21.831192] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:37.979 [2024-11-21 01:33:21.831261] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:37.979 [2024-11-21 01:33:21.835697] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:37.979 [2024-11-21 01:33:21.836088] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:37.979 [2024-11-21 01:33:21.836174] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:37.979 [2024-11-21 01:33:21.836232] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:37.979 [2024-11-21 01:33:21.836278] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:37.979 done. 00:07:37.979 01:33:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:37.979 01:33:21 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:37.979 01:33:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:37.979 01:33:21 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:37.979 01:33:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.979 01:33:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.979 ************************************ 00:07:37.979 START TEST nvme_reset 00:07:37.979 ************************************ 00:07:37.979 01:33:21 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:38.237 Initializing NVMe Controllers 00:07:38.237 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:38.237 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:38.237 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:38.237 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:38.237 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:38.237 00:07:38.237 real 0m0.210s 00:07:38.237 user 0m0.066s 00:07:38.237 sys 0m0.099s 00:07:38.237 01:33:22 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.237 ************************************ 00:07:38.237 END TEST nvme_reset 00:07:38.237 ************************************ 00:07:38.237 01:33:22 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:38.237 01:33:22 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:38.237 01:33:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.237 01:33:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.237 01:33:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.237 ************************************ 00:07:38.237 START TEST nvme_identify 00:07:38.237 ************************************ 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:38.237 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:38.237 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:38.237 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:38.237 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.237 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:38.498 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:38.498 01:33:22 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:38.498 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:38.498 [2024-11-21 
01:33:22.400633] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62749 terminated unexpected 00:07:38.498 ===================================================== 00:07:38.498 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:38.498 ===================================================== 00:07:38.498 Controller Capabilities/Features 00:07:38.498 ================================ 00:07:38.498 Vendor ID: 1b36 00:07:38.498 Subsystem Vendor ID: 1af4 00:07:38.498 Serial Number: 12341 00:07:38.498 Model Number: QEMU NVMe Ctrl 00:07:38.498 Firmware Version: 8.0.0 00:07:38.498 Recommended Arb Burst: 6 00:07:38.498 IEEE OUI Identifier: 00 54 52 00:07:38.498 Multi-path I/O 00:07:38.498 May have multiple subsystem ports: No 00:07:38.498 May have multiple controllers: No 00:07:38.498 Associated with SR-IOV VF: No 00:07:38.498 Max Data Transfer Size: 524288 00:07:38.498 Max Number of Namespaces: 256 00:07:38.498 Max Number of I/O Queues: 64 00:07:38.498 NVMe Specification Version (VS): 1.4 00:07:38.498 NVMe Specification Version (Identify): 1.4 00:07:38.498 Maximum Queue Entries: 2048 00:07:38.498 Contiguous Queues Required: Yes 00:07:38.498 Arbitration Mechanisms Supported 00:07:38.498 Weighted Round Robin: Not Supported 00:07:38.498 Vendor Specific: Not Supported 00:07:38.498 Reset Timeout: 7500 ms 00:07:38.498 Doorbell Stride: 4 bytes 00:07:38.498 NVM Subsystem Reset: Not Supported 00:07:38.498 Command Sets Supported 00:07:38.498 NVM Command Set: Supported 00:07:38.498 Boot Partition: Not Supported 00:07:38.498 Memory Page Size Minimum: 4096 bytes 00:07:38.498 Memory Page Size Maximum: 65536 bytes 00:07:38.498 Persistent Memory Region: Not Supported 00:07:38.498 Optional Asynchronous Events Supported 00:07:38.498 Namespace Attribute Notices: Supported 00:07:38.498 Firmware Activation Notices: Not Supported 00:07:38.498 ANA Change Notices: Not Supported 00:07:38.498 PLE Aggregate Log Change Notices: Not Supported 00:07:38.498 LBA Status Info Alert Notices: Not Supported 00:07:38.498 EGE Aggregate Log Change Notices: Not Supported 00:07:38.498 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.498 Zone Descriptor Change Notices: Not Supported 00:07:38.498 Discovery Log Change Notices: Not Supported 00:07:38.498 Controller Attributes 00:07:38.498 128-bit Host Identifier: Not Supported 00:07:38.498 Non-Operational Permissive Mode: Not Supported 00:07:38.498 NVM Sets: Not Supported 00:07:38.498 Read Recovery Levels: Not Supported 00:07:38.498 Endurance Groups: Not Supported 00:07:38.498 Predictable Latency Mode: Not Supported 00:07:38.498 Traffic Based Keep ALive: Not Supported 00:07:38.498 Namespace Granularity: Not Supported 00:07:38.498 SQ Associations: Not Supported 00:07:38.498 UUID List: Not Supported 00:07:38.498 Multi-Domain Subsystem: Not Supported 00:07:38.498 Fixed Capacity Management: Not Supported 00:07:38.498 Variable Capacity Management: Not Supported 00:07:38.498 Delete Endurance Group: Not Supported 00:07:38.498 Delete NVM Set: Not Supported 00:07:38.498 Extended LBA Formats Supported: Supported 00:07:38.498 Flexible Data Placement Supported: Not Supported 00:07:38.498 00:07:38.498 Controller Memory Buffer Support 00:07:38.498 ================================ 00:07:38.498 Supported: No 00:07:38.498 00:07:38.498 Persistent Memory Region Support 00:07:38.498 ================================ 00:07:38.498 Supported: No 00:07:38.498 00:07:38.498 Admin Command Set Attributes 00:07:38.498 ============================ 00:07:38.498 Security Send/Receive: 
Not Supported 00:07:38.498 Format NVM: Supported 00:07:38.498 Firmware Activate/Download: Not Supported 00:07:38.498 Namespace Management: Supported 00:07:38.498 Device Self-Test: Not Supported 00:07:38.498 Directives: Supported 00:07:38.498 NVMe-MI: Not Supported 00:07:38.498 Virtualization Management: Not Supported 00:07:38.498 Doorbell Buffer Config: Supported 00:07:38.498 Get LBA Status Capability: Not Supported 00:07:38.498 Command & Feature Lockdown Capability: Not Supported 00:07:38.498 Abort Command Limit: 4 00:07:38.498 Async Event Request Limit: 4 00:07:38.498 Number of Firmware Slots: N/A 00:07:38.498 Firmware Slot 1 Read-Only: N/A 00:07:38.498 Firmware Activation Without Reset: N/A 00:07:38.498 Multiple Update Detection Support: N/A 00:07:38.498 Firmware Update Granularity: No Information Provided 00:07:38.498 Per-Namespace SMART Log: Yes 00:07:38.498 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.498 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:38.498 Command Effects Log Page: Supported 00:07:38.498 Get Log Page Extended Data: Supported 00:07:38.498 Telemetry Log Pages: Not Supported 00:07:38.498 Persistent Event Log Pages: Not Supported 00:07:38.498 Supported Log Pages Log Page: May Support 00:07:38.498 Commands Supported & Effects Log Page: Not Supported 00:07:38.498 Feature Identifiers & Effects Log Page:May Support 00:07:38.498 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.498 Data Area 4 for Telemetry Log: Not Supported 00:07:38.498 Error Log Page Entries Supported: 1 00:07:38.498 Keep Alive: Not Supported 00:07:38.498 00:07:38.498 NVM Command Set Attributes 00:07:38.498 ========================== 00:07:38.498 Submission Queue Entry Size 00:07:38.498 Max: 64 00:07:38.498 Min: 64 00:07:38.498 Completion Queue Entry Size 00:07:38.498 Max: 16 00:07:38.498 Min: 16 00:07:38.498 Number of Namespaces: 256 00:07:38.498 Compare Command: Supported 00:07:38.498 Write Uncorrectable Command: Not Supported 00:07:38.498 Dataset Management Command: Supported 00:07:38.498 Write Zeroes Command: Supported 00:07:38.498 Set Features Save Field: Supported 00:07:38.498 Reservations: Not Supported 00:07:38.498 Timestamp: Supported 00:07:38.498 Copy: Supported 00:07:38.498 Volatile Write Cache: Present 00:07:38.498 Atomic Write Unit (Normal): 1 00:07:38.498 Atomic Write Unit (PFail): 1 00:07:38.498 Atomic Compare & Write Unit: 1 00:07:38.498 Fused Compare & Write: Not Supported 00:07:38.498 Scatter-Gather List 00:07:38.498 SGL Command Set: Supported 00:07:38.498 SGL Keyed: Not Supported 00:07:38.498 SGL Bit Bucket Descriptor: Not Supported 00:07:38.498 SGL Metadata Pointer: Not Supported 00:07:38.498 Oversized SGL: Not Supported 00:07:38.498 SGL Metadata Address: Not Supported 00:07:38.498 SGL Offset: Not Supported 00:07:38.498 Transport SGL Data Block: Not Supported 00:07:38.498 Replay Protected Memory Block: Not Supported 00:07:38.498 00:07:38.498 Firmware Slot Information 00:07:38.498 ========================= 00:07:38.498 Active slot: 1 00:07:38.498 Slot 1 Firmware Revision: 1.0 00:07:38.498 00:07:38.498 00:07:38.498 Commands Supported and Effects 00:07:38.498 ============================== 00:07:38.498 Admin Commands 00:07:38.498 -------------- 00:07:38.498 Delete I/O Submission Queue (00h): Supported 00:07:38.498 Create I/O Submission Queue (01h): Supported 00:07:38.499 Get Log Page (02h): Supported 00:07:38.499 Delete I/O Completion Queue (04h): Supported 00:07:38.499 Create I/O Completion Queue (05h): Supported 00:07:38.499 Identify (06h): Supported 
00:07:38.499 Abort (08h): Supported 00:07:38.499 Set Features (09h): Supported 00:07:38.499 Get Features (0Ah): Supported 00:07:38.499 Asynchronous Event Request (0Ch): Supported 00:07:38.499 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.499 Directive Send (19h): Supported 00:07:38.499 Directive Receive (1Ah): Supported 00:07:38.499 Virtualization Management (1Ch): Supported 00:07:38.499 Doorbell Buffer Config (7Ch): Supported 00:07:38.499 Format NVM (80h): Supported LBA-Change 00:07:38.499 I/O Commands 00:07:38.499 ------------ 00:07:38.499 Flush (00h): Supported LBA-Change 00:07:38.499 Write (01h): Supported LBA-Change 00:07:38.499 Read (02h): Supported 00:07:38.499 Compare (05h): Supported 00:07:38.499 Write Zeroes (08h): Supported LBA-Change 00:07:38.499 Dataset Management (09h): Supported LBA-Change 00:07:38.499 Unknown (0Ch): Supported 00:07:38.499 Unknown (12h): Supported 00:07:38.499 Copy (19h): Supported LBA-Change 00:07:38.499 Unknown (1Dh): Supported LBA-Change 00:07:38.499 00:07:38.499 Error Log 00:07:38.499 ========= 00:07:38.499 00:07:38.499 Arbitration 00:07:38.499 =========== 00:07:38.499 Arbitration Burst: no limit 00:07:38.499 00:07:38.499 Power Management 00:07:38.499 ================ 00:07:38.499 Number of Power States: 1 00:07:38.499 Current Power State: Power State #0 00:07:38.499 Power State #0: 00:07:38.499 Max Power: 25.00 W 00:07:38.499 Non-Operational State: Operational 00:07:38.499 Entry Latency: 16 microseconds 00:07:38.499 Exit Latency: 4 microseconds 00:07:38.499 Relative Read Throughput: 0 00:07:38.499 Relative Read Latency: 0 00:07:38.499 Relative Write Throughput: 0 00:07:38.499 Relative Write Latency: 0 00:07:38.499 Idle Power: Not Reported 00:07:38.499 Active Power: Not Reported 00:07:38.499 Non-Operational Permissive Mode: Not Supported 00:07:38.499 00:07:38.499 Health Information 00:07:38.499 ================== 00:07:38.499 Critical Warnings: 00:07:38.499 Available Spare Space: OK 00:07:38.499 Temperature: OK 00:07:38.499 Device Reliability: OK 00:07:38.499 Read Only: No 00:07:38.499 Volatile Memory Backup: OK 00:07:38.499 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.499 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.499 Available Spare: 0% 00:07:38.499 Available Spare Threshold: 0% 00:07:38.499 Life Percentage Used: 0% 00:07:38.499 Data Units Read: 1077 00:07:38.499 Data Units Written: 944 00:07:38.499 Host Read Commands: 56416 00:07:38.499 Host Write Commands: 55207 00:07:38.499 Controller Busy Time: 0 minutes 00:07:38.499 Power Cycles: 0 00:07:38.499 Power On Hours: 0 hours 00:07:38.499 Unsafe Shutdowns: 0 00:07:38.499 Unrecoverable Media Errors: 0 00:07:38.499 Lifetime Error Log Entries: 0 00:07:38.499 Warning Temperature Time: 0 minutes 00:07:38.499 Critical Temperature Time: 0 minutes 00:07:38.499 00:07:38.499 Number of Queues 00:07:38.499 ================ 00:07:38.499 Number of I/O Submission Queues: 64 00:07:38.499 Number of I/O Completion Queues: 64 00:07:38.499 00:07:38.499 ZNS Specific Controller Data 00:07:38.499 ============================ 00:07:38.499 Zone Append Size Limit: 0 00:07:38.499 00:07:38.499 00:07:38.499 Active Namespaces 00:07:38.499 ================= 00:07:38.499 Namespace ID:1 00:07:38.499 Error Recovery Timeout: Unlimited 00:07:38.499 Command Set Identifier: NVM (00h) 00:07:38.499 Deallocate: Supported 00:07:38.499 Deallocated/Unwritten Error: Supported 00:07:38.499 Deallocated Read Value: All 0x00 00:07:38.499 Deallocate in Write Zeroes: Not Supported 00:07:38.499 Deallocated Guard 
Field: 0xFFFF 00:07:38.499 Flush: Supported 00:07:38.499 Reservation: Not Supported 00:07:38.499 Namespace Sharing Capabilities: Private 00:07:38.499 Size (in LBAs): 1310720 (5GiB) 00:07:38.499 Capacity (in LBAs): 1310720 (5GiB) 00:07:38.499 Utilization (in LBAs): 1310720 (5GiB) 00:07:38.499 Thin Provisioning: Not Supported 00:07:38.499 Per-NS Atomic Units: No 00:07:38.499 Maximum Single Source Range Length: 128 00:07:38.499 Maximum Copy Length: 128 00:07:38.499 Maximum Source Range Count: 128 00:07:38.499 NGUID/EUI64 Never Reused: No 00:07:38.499 Namespace Write Protected: No 00:07:38.499 Number of LBA Formats: 8 00:07:38.499 Current LBA Format: LBA Format #04 00:07:38.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.499 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.499 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.499 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.499 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.499 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.499 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.499 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.499 00:07:38.499 NVM Specific Namespace Data 00:07:38.499 =========================== 00:07:38.499 Logical Block Storage Tag Mask: 0 00:07:38.499 Protection Information Capabilities: 00:07:38.499 16b Guard Protection Information Storage Tag Support: No 00:07:38.499 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.499 Storage Tag Check Read Support: No 00:07:38.499 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.499 ===================================================== 00:07:38.499 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:38.499 ===================================================== 00:07:38.499 Controller Capabilities/Features 00:07:38.499 ================================ 00:07:38.499 Vendor ID: 1b36 00:07:38.499 Subsystem Vendor ID: 1af4 00:07:38.499 Serial Number: 12343 00:07:38.499 Model Number: QEMU NVMe Ctrl 00:07:38.499 Firmware Version: 8.0.0 00:07:38.499 Recommended Arb Burst: 6 00:07:38.499 IEEE OUI Identifier: 00 54 52 00:07:38.499 Multi-path I/O 00:07:38.499 May have multiple subsystem ports: No 00:07:38.499 May have multiple controllers: Yes 00:07:38.499 Associated with SR-IOV VF: No 00:07:38.499 Max Data Transfer Size: 524288 00:07:38.499 Max Number of Namespaces: 256 00:07:38.499 Max Number of I/O Queues: 64 00:07:38.499 NVMe Specification Version (VS): 1.4 00:07:38.499 NVMe Specification Version (Identify): 1.4 00:07:38.499 Maximum Queue Entries: 2048 00:07:38.499 Contiguous Queues Required: Yes 00:07:38.499 Arbitration Mechanisms Supported 00:07:38.499 Weighted Round Robin: Not Supported 00:07:38.499 Vendor 
Specific: Not Supported 00:07:38.499 Reset Timeout: 7500 ms 00:07:38.499 Doorbell Stride: 4 bytes 00:07:38.499 NVM Subsystem Reset: Not Supported 00:07:38.499 Command Sets Supported 00:07:38.499 NVM Command Set: Supported 00:07:38.499 Boot Partition: Not Supported 00:07:38.499 Memory Page Size Minimum: 4096 bytes 00:07:38.499 Memory Page Size Maximum: 65536 bytes 00:07:38.499 Persistent Memory Region: Not Supported 00:07:38.499 Optional Asynchronous Events Supported 00:07:38.499 Namespace Attribute Notices: Supported 00:07:38.499 Firmware Activation Notices: Not Supported 00:07:38.499 ANA Change Notices: Not Supported 00:07:38.499 PLE Aggregate Log Change Notices: Not Supported 00:07:38.499 LBA Status Info Alert Notices: Not Supported 00:07:38.499 EGE Aggregate Log Change Notices: Not Supported 00:07:38.499 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.499 Zone Descriptor Change Notices: Not Supported 00:07:38.499 Discovery Log Change Notices: Not Supported 00:07:38.499 Controller Attributes 00:07:38.499 128-bit Host Identifier: Not Supported 00:07:38.499 Non-Operational Permissive Mode: Not Supported 00:07:38.499 NVM Sets: Not Supported 00:07:38.499 Read Recovery Levels: Not Supported 00:07:38.499 Endurance Groups: Supported 00:07:38.499 Predictable Latency Mode: Not Supported 00:07:38.499 Traffic Based Keep ALive: Not Supported 00:07:38.499 Namespace Granularity: Not Supported 00:07:38.499 SQ Associations: Not Supported 00:07:38.499 UUID List: Not Supported 00:07:38.499 Multi-Domain Subsystem: Not Supported 00:07:38.499 Fixed Capacity Management: Not Supported 00:07:38.499 Variable Capacity Management: Not Supported 00:07:38.499 Delete Endurance Group: Not Supported 00:07:38.499 Delete NVM Set: Not Supported 00:07:38.499 Extended LBA Formats Supported: Supported 00:07:38.499 Flexible Data Placement Supported: Supported 00:07:38.500 00:07:38.500 Controller Memory Buffer Support 00:07:38.500 ================================ 00:07:38.500 Supported: No 00:07:38.500 00:07:38.500 Persistent Memory Region Support 00:07:38.500 ================================ 00:07:38.500 Supported: No 00:07:38.500 00:07:38.500 Admin Command Set Attributes 00:07:38.500 ============================ 00:07:38.500 Security Send/Receive: Not Supported 00:07:38.500 Format NVM: Supported 00:07:38.500 Firmware Activate/Download: Not Supported 00:07:38.500 Namespace Management: Supported 00:07:38.500 Device Self-Test: Not Supported 00:07:38.500 Directives: Supported 00:07:38.500 NVMe-MI: Not Supported 00:07:38.500 Virtualization Management: Not Supported 00:07:38.500 Doorbell Buffer Config: Supported 00:07:38.500 Get LBA Status Capability: Not Supported 00:07:38.500 Command & Feature Lockdown Capability: Not Supported 00:07:38.500 Abort Command Limit: 4 00:07:38.500 Async Event Request Limit: 4 00:07:38.500 Number of Firmware Slots: N/A 00:07:38.500 Firmware Slot 1 Read-Only: N/A 00:07:38.500 Firmware Activation Without Reset: N/A 00:07:38.500 Multiple Update Detection Support: N/A 00:07:38.500 Firmware Update Granularity: No Information Provided 00:07:38.500 Per-Namespace SMART Log: Yes 00:07:38.500 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.500 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:38.500 Command Effects Log Page: Supported 00:07:38.500 Get Log Page Extended Data: Supported 00:07:38.500 Telemetry Log Pages: Not Supported 00:07:38.500 Persistent Event Log Pages: Not Supported 00:07:38.500 Supported Log Pages Log Page: May Support 00:07:38.500 Commands Supported & Effects 
Log Page: Not Supported 00:07:38.500 Feature Identifiers & Effects Log Page:May Support 00:07:38.500 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.500 Data Area 4 for Telemetry Log: Not Supported 00:07:38.500 Error Log Page Entries Supported: 1 00:07:38.500 Keep Alive: Not Supported 00:07:38.500 00:07:38.500 NVM Command Set Attributes 00:07:38.500 ========================== 00:07:38.500 Submission Queue Entry Size 00:07:38.500 Max: 64 00:07:38.500 Min: 64 00:07:38.500 Completion Queue Entry Size 00:07:38.500 Max: 16 00:07:38.500 Min: 16 00:07:38.500 Number of Namespaces: 256 00:07:38.500 Compare Command: Supported 00:07:38.500 Write Uncorrectable Command: Not Supported 00:07:38.500 Dataset Management Command: Supported 00:07:38.500 Write Zeroes Command: Supported 00:07:38.500 Set Features Save Field: Supported 00:07:38.500 Reservations: Not Supported 00:07:38.500 Timestamp: Supported 00:07:38.500 Copy: Supported 00:07:38.500 Volatile Write Cache: Present 00:07:38.500 Atomic Write Unit (Normal): 1 00:07:38.500 Atomic Write Unit (PFail): 1 00:07:38.500 Atomic Compare & Write Unit: 1 00:07:38.500 Fused Compare & Write: Not Supported 00:07:38.500 Scatter-Gather List 00:07:38.500 SGL Command Set: Supported 00:07:38.500 SGL Keyed: Not Supported 00:07:38.500 SGL Bit Bucket Descriptor: Not Supported 00:07:38.500 SGL Metadata Pointer: Not Supported 00:07:38.500 Oversized SGL: Not Supported 00:07:38.500 SGL Metadata Address: Not Supported 00:07:38.500 SGL Offset: Not Supported 00:07:38.500 Transport SGL Data Block: Not Supported 00:07:38.500 Replay Protected Memory Block: Not Supported 00:07:38.500 00:07:38.500 Firmware Slot Information 00:07:38.500 ========================= 00:07:38.500 Active slot: 1 00:07:38.500 Slot 1 Firmware Revision: 1.0 00:07:38.500 00:07:38.500 00:07:38.500 Commands Supported and Effects 00:07:38.500 ============================== 00:07:38.500 Admin Commands 00:07:38.500 -------------- 00:07:38.500 Delete I/O Submission Queue (00h): Supported 00:07:38.500 Create I/O Submission Queue (01h): Supported 00:07:38.500 Get Log Page (02h): Supported 00:07:38.500 Delete I/O Completion Queue (04h): Supported 00:07:38.500 Create I/O Completion Queue (05h): Supported 00:07:38.500 Identify (06h): Supported 00:07:38.500 Abort (08h): Supported 00:07:38.500 Set Features (09h): Supported 00:07:38.500 Get Features (0Ah): Supported 00:07:38.500 Asynchronous Event Request (0Ch): Supported 00:07:38.500 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.500 Directive Send (19h): Supported 00:07:38.500 Directive Receive (1Ah): Supported 00:07:38.500 Virtualization Management (1Ch): Supported 00:07:38.500 Doorbell Buffer Config (7Ch): Supported 00:07:38.500 Format NVM (80h): Supported LBA-Change 00:07:38.500 I/O Commands 00:07:38.500 ------------ 00:07:38.500 Flush (00h): Supported LBA-Change 00:07:38.500 Write (01h): Supported LBA-Change 00:07:38.500 Read (02h): Supported 00:07:38.500 Compare (05h): Supported 00:07:38.500 Write Zeroes (08h): Supported LBA-Change 00:07:38.500 Dataset Management (09h): Supported LBA-Change 00:07:38.500 Unknown (0Ch): Supported 00:07:38.500 Unknown (12h): Supported 00:07:38.500 Copy (19h): Supported LBA-Change 00:07:38.500 Unknown (1Dh): Supported LBA-Change 00:07:38.500 00:07:38.500 Error Log 00:07:38.500 ========= 00:07:38.500 00:07:38.500 Arbitration 00:07:38.500 =========== 00:07:38.500 Arbitration Burst: no limit 00:07:38.500 00:07:38.500 Power Management 00:07:38.500 ================ 00:07:38.500 Number of Power States: 1 
00:07:38.500 Current Power State: Power State #0 00:07:38.500 Power State #0: 00:07:38.500 Max Power: 25.00 W 00:07:38.500 Non-Operational State: Operational 00:07:38.500 Entry Latency: 16 microseconds 00:07:38.500 Exit Latency: 4 microseconds 00:07:38.500 Relative Read Throughput: 0 00:07:38.500 Relative Read Latency: 0 00:07:38.500 Relative Write Throughput: 0 00:07:38.500 Relative Write Latency: 0 00:07:38.500 Idle Power: Not Reported 00:07:38.500 Active Power: Not Reported 00:07:38.500 Non-Operational Permissive Mode: Not Supported 00:07:38.500 00:07:38.500 Health Information 00:07:38.500 ================== 00:07:38.500 Critical Warnings: 00:07:38.500 Available Spare Space: OK 00:07:38.500 Temperature: OK 00:07:38.500 Device Reliability: OK 00:07:38.500 Read Only: No 00:07:38.500 Volatile Memory Backup: OK 00:07:38.500 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.500 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.500 Available Spare: 0% 00:07:38.500 Available Spare Threshold: 0% 00:07:38.500 Life Percentage Used: 0% 00:07:38.500 Data Units Read: 787 00:07:38.500 Data Units Written: 716 00:07:38.500 Host Read Commands: 39284 00:07:38.500 Host Write Commands: 38707 00:07:38.500 Controller Busy Time: 0 minutes 00:07:38.500 Power Cycles: 0 00:07:38.500 Power On Hours: 0 hours 00:07:38.500 Unsafe Shutdowns: 0 00:07:38.500 Unrecoverable Media Errors: 0 00:07:38.500 Lifetime Error Log Entries: 0 00:07:38.500 Warning Temperature Time: 0 minutes 00:07:38.500 Critical Temperature Time: 0 minutes 00:07:38.500 00:07:38.500 Number of Queues 00:07:38.500 ================ 00:07:38.500 Number of I/O Submission Queues: 64 00:07:38.500 Number of I/O Completion Queues: 64 00:07:38.500 00:07:38.500 ZNS Specific Controller Data 00:07:38.500 ============================ 00:07:38.500 Zone Append Size Limit: 0 00:07:38.500 00:07:38.500 00:07:38.500 Active Namespaces 00:07:38.500 ================= 00:07:38.500 Namespace ID:1 00:07:38.500 Error Recovery Timeout: Unlimited 00:07:38.500 Command Set Identifier: NVM (00h) 00:07:38.500 Deallocate: Supported 00:07:38.500 Deallocated/Unwritten Error: Supported 00:07:38.500 Deallocated Read Value: All 0x00 00:07:38.500 Deallocate in Write Zeroes: Not Supported 00:07:38.500 Deallocated Guard Field: 0xFFFF 00:07:38.500 Flush: Supported 00:07:38.500 Reservation: Not Supported 00:07:38.500 Namespace Sharing Capabilities: Multiple Controllers 00:07:38.501 Size (in LBAs): 262144 (1GiB) 00:07:38.501 Capacity (in LBAs): 262144 (1GiB) 00:07:38.501 Utilization (in LBAs): 262144 (1GiB) 00:07:38.501 Thin Provisioning: Not Supported 00:07:38.501 Per-NS Atomic Units: No 00:07:38.501 Maximum Single Source Range Length: 128 00:07:38.501 Maximum Copy Length: 128 00:07:38.501 Maximum Source Range Count: 128 00:07:38.501 NGUID/EUI64 Never Reused: No 00:07:38.501 Namespace Write Protected: No 00:07:38.501 Endurance group ID: 1 00:07:38.501 Number of LBA Formats: 8 00:07:38.501 Current LBA Format: LBA Format #04 00:07:38.501 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.501 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.501 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.501 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.501 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.501 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.501 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.501 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.501 00:07:38.501 Get Feature FDP: 00:07:38.501 ================ 
00:07:38.501 Enabled: Yes 00:07:38.501 FDP configuration index: 0 00:07:38.501 00:07:38.501 FDP configurations log page 00:07:38.501 =========================== 00:07:38.501 Number of FDP configurations: 1 00:07:38.501 Version: 0 00:07:38.501 Size: 112 00:07:38.501 FDP Configuration Descriptor: 0 00:07:38.501 Descriptor Size: 96 00:07:38.501 Reclaim Group Identifier format: 2 00:07:38.501 FDP Volatile Write Cache: Not Present 00:07:38.501 FDP Configuration: Valid 00:07:38.501 Vendor Specific Size: 0 00:07:38.501 Number of Reclaim Groups: 2 00:07:38.501 Number of Reclaim Unit Handles: 8 00:07:38.501 Max Placement Identifiers: 128 00:07:38.501 Number of Namespaces Supported: 256 00:07:38.501 Reclaim unit Nominal Size: 6000000 bytes 00:07:38.501 Estimated Reclaim Unit Time Limit: Not Reported 00:07:38.501 RUH Desc #000: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #001: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #002: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #003: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #004: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #005: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #006: RUH Type: Initially Isolated 00:07:38.501 RUH Desc #007: RUH Type: Initially Isolated 00:07:38.501 00:07:38.501 FDP reclaim unit handle usage log page 00:07:38.501 ====================================== 00:07:38.501 Number of Reclaim Unit Handles: 8 00:07:38.501 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:38.501 RUH Usage Desc #001: RUH Attributes: Unused 00:07:38.501 RUH Usage Desc #002: RUH Attributes: Unused 00:07:38.501 RUH Usage Desc #003: RUH Attributes: Unused 00:07:38.501 RUH Usage Desc #004: RUH Attributes: Unused 00:07:38.501 [2024-11-21 01:33:22.402461] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62749 terminated unexpected 00:07:38.501 [2024-11-21 01:33:22.404293] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62749 terminated unexpected 00:07:38.501 RUH Usage Desc #005: RUH Attributes: Unused 00:07:38.501 RUH Usage Desc #006: RUH Attributes: Unused 00:07:38.501 RUH Usage Desc #007: RUH Attributes: Unused 00:07:38.501 00:07:38.501 FDP statistics log page 00:07:38.501 ======================= 00:07:38.501 Host bytes with metadata written: 466395136 00:07:38.501 Media bytes with metadata written: 466448384 00:07:38.501 Media bytes erased: 0 00:07:38.501 00:07:38.501 FDP events log page 00:07:38.501 =================== 00:07:38.501 Number of FDP events: 0 00:07:38.501 00:07:38.501 NVM Specific Namespace Data 00:07:38.501 =========================== 00:07:38.501 Logical Block Storage Tag Mask: 0 00:07:38.501 Protection Information Capabilities: 00:07:38.501 16b Guard Protection Information Storage Tag Support: No 00:07:38.501 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.501 Storage Tag Check Read Support: No 00:07:38.501 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #05: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.501 ===================================================== 00:07:38.501 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.501 ===================================================== 00:07:38.501 Controller Capabilities/Features 00:07:38.501 ================================ 00:07:38.501 Vendor ID: 1b36 00:07:38.501 Subsystem Vendor ID: 1af4 00:07:38.501 Serial Number: 12340 00:07:38.501 Model Number: QEMU NVMe Ctrl 00:07:38.501 Firmware Version: 8.0.0 00:07:38.501 Recommended Arb Burst: 6 00:07:38.501 IEEE OUI Identifier: 00 54 52 00:07:38.501 Multi-path I/O 00:07:38.501 May have multiple subsystem ports: No 00:07:38.501 May have multiple controllers: No 00:07:38.501 Associated with SR-IOV VF: No 00:07:38.501 Max Data Transfer Size: 524288 00:07:38.501 Max Number of Namespaces: 256 00:07:38.501 Max Number of I/O Queues: 64 00:07:38.501 NVMe Specification Version (VS): 1.4 00:07:38.501 NVMe Specification Version (Identify): 1.4 00:07:38.501 Maximum Queue Entries: 2048 00:07:38.501 Contiguous Queues Required: Yes 00:07:38.501 Arbitration Mechanisms Supported 00:07:38.501 Weighted Round Robin: Not Supported 00:07:38.501 Vendor Specific: Not Supported 00:07:38.501 Reset Timeout: 7500 ms 00:07:38.501 Doorbell Stride: 4 bytes 00:07:38.501 NVM Subsystem Reset: Not Supported 00:07:38.501 Command Sets Supported 00:07:38.501 NVM Command Set: Supported 00:07:38.501 Boot Partition: Not Supported 00:07:38.501 Memory Page Size Minimum: 4096 bytes 00:07:38.501 Memory Page Size Maximum: 65536 bytes 00:07:38.501 Persistent Memory Region: Not Supported 00:07:38.501 Optional Asynchronous Events Supported 00:07:38.501 Namespace Attribute Notices: Supported 00:07:38.501 Firmware Activation Notices: Not Supported 00:07:38.501 ANA Change Notices: Not Supported 00:07:38.501 PLE Aggregate Log Change Notices: Not Supported 00:07:38.501 LBA Status Info Alert Notices: Not Supported 00:07:38.501 EGE Aggregate Log Change Notices: Not Supported 00:07:38.501 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.501 Zone Descriptor Change Notices: Not Supported 00:07:38.501 Discovery Log Change Notices: Not Supported 00:07:38.501 Controller Attributes 00:07:38.501 128-bit Host Identifier: Not Supported 00:07:38.501 Non-Operational Permissive Mode: Not Supported 00:07:38.501 NVM Sets: Not Supported 00:07:38.501 Read Recovery Levels: Not Supported 00:07:38.501 Endurance Groups: Not Supported 00:07:38.501 Predictable Latency Mode: Not Supported 00:07:38.501 Traffic Based Keep ALive: Not Supported 00:07:38.501 Namespace Granularity: Not Supported 00:07:38.501 SQ Associations: Not Supported 00:07:38.501 UUID List: Not Supported 00:07:38.501 Multi-Domain Subsystem: Not Supported 00:07:38.501 Fixed Capacity Management: Not Supported 00:07:38.501 Variable Capacity Management: Not Supported 00:07:38.501 Delete Endurance Group: Not Supported 00:07:38.501 Delete NVM Set: Not Supported 00:07:38.501 Extended LBA Formats Supported: Supported 00:07:38.501 Flexible Data Placement Supported: Not Supported 00:07:38.501 00:07:38.501 Controller Memory Buffer Support 00:07:38.501 ================================ 00:07:38.501 Supported: No 00:07:38.501 00:07:38.501 Persistent Memory Region Support 00:07:38.501 ================================ 00:07:38.501 Supported: No 00:07:38.501 
00:07:38.501 Admin Command Set Attributes 00:07:38.501 ============================ 00:07:38.501 Security Send/Receive: Not Supported 00:07:38.501 Format NVM: Supported 00:07:38.501 Firmware Activate/Download: Not Supported 00:07:38.501 Namespace Management: Supported 00:07:38.501 Device Self-Test: Not Supported 00:07:38.501 Directives: Supported 00:07:38.501 NVMe-MI: Not Supported 00:07:38.501 Virtualization Management: Not Supported 00:07:38.501 Doorbell Buffer Config: Supported 00:07:38.501 Get LBA Status Capability: Not Supported 00:07:38.501 Command & Feature Lockdown Capability: Not Supported 00:07:38.501 Abort Command Limit: 4 00:07:38.501 Async Event Request Limit: 4 00:07:38.501 Number of Firmware Slots: N/A 00:07:38.501 Firmware Slot 1 Read-Only: N/A 00:07:38.501 Firmware Activation Without Reset: N/A 00:07:38.501 Multiple Update Detection Support: N/A 00:07:38.501 Firmware Update Granularity: No Information Provided 00:07:38.501 Per-Namespace SMART Log: Yes 00:07:38.501 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.501 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:38.501 Command Effects Log Page: Supported 00:07:38.501 Get Log Page Extended Data: Supported 00:07:38.502 Telemetry Log Pages: Not Supported 00:07:38.502 Persistent Event Log Pages: Not Supported 00:07:38.502 Supported Log Pages Log Page: May Support 00:07:38.502 Commands Supported & Effects Log Page: Not Supported 00:07:38.502 Feature Identifiers & Effects Log Page:May Support 00:07:38.502 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.502 Data Area 4 for Telemetry Log: Not Supported 00:07:38.502 Error Log Page Entries Supported: 1 00:07:38.502 Keep Alive: Not Supported 00:07:38.502 00:07:38.502 NVM Command Set Attributes 00:07:38.502 ========================== 00:07:38.502 Submission Queue Entry Size 00:07:38.502 Max: 64 00:07:38.502 Min: 64 00:07:38.502 Completion Queue Entry Size 00:07:38.502 Max: 16 00:07:38.502 Min: 16 00:07:38.502 Number of Namespaces: 256 00:07:38.502 Compare Command: Supported 00:07:38.502 Write Uncorrectable Command: Not Supported 00:07:38.502 Dataset Management Command: Supported 00:07:38.502 Write Zeroes Command: Supported 00:07:38.502 Set Features Save Field: Supported 00:07:38.502 Reservations: Not Supported 00:07:38.502 Timestamp: Supported 00:07:38.502 Copy: Supported 00:07:38.502 Volatile Write Cache: Present 00:07:38.502 Atomic Write Unit (Normal): 1 00:07:38.502 Atomic Write Unit (PFail): 1 00:07:38.502 Atomic Compare & Write Unit: 1 00:07:38.502 Fused Compare & Write: Not Supported 00:07:38.502 Scatter-Gather List 00:07:38.502 SGL Command Set: Supported 00:07:38.502 SGL Keyed: Not Supported 00:07:38.502 SGL Bit Bucket Descriptor: Not Supported 00:07:38.502 SGL Metadata Pointer: Not Supported 00:07:38.502 Oversized SGL: Not Supported 00:07:38.502 SGL Metadata Address: Not Supported 00:07:38.502 SGL Offset: Not Supported 00:07:38.502 Transport SGL Data Block: Not Supported 00:07:38.502 Replay Protected Memory Block: Not Supported 00:07:38.502 00:07:38.502 Firmware Slot Information 00:07:38.502 ========================= 00:07:38.502 Active slot: 1 00:07:38.502 Slot 1 Firmware Revision: 1.0 00:07:38.502 00:07:38.502 00:07:38.502 Commands Supported and Effects 00:07:38.502 ============================== 00:07:38.502 Admin Commands 00:07:38.502 -------------- 00:07:38.502 Delete I/O Submission Queue (00h): Supported 00:07:38.502 Create I/O Submission Queue (01h): Supported 00:07:38.502 Get Log Page (02h): Supported 00:07:38.502 Delete I/O Completion Queue 
(04h): Supported 00:07:38.502 Create I/O Completion Queue (05h): Supported 00:07:38.502 Identify (06h): Supported 00:07:38.502 Abort (08h): Supported 00:07:38.502 Set Features (09h): Supported 00:07:38.502 Get Features (0Ah): Supported 00:07:38.502 Asynchronous Event Request (0Ch): Supported 00:07:38.502 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.502 Directive Send (19h): Supported 00:07:38.502 Directive Receive (1Ah): Supported 00:07:38.502 Virtualization Management (1Ch): Supported 00:07:38.502 Doorbell Buffer Config (7Ch): Supported 00:07:38.502 Format NVM (80h): Supported LBA-Change 00:07:38.502 I/O Commands 00:07:38.502 ------------ 00:07:38.502 Flush (00h): Supported LBA-Change 00:07:38.502 Write (01h): Supported LBA-Change 00:07:38.502 Read (02h): Supported 00:07:38.502 Compare (05h): Supported 00:07:38.502 Write Zeroes (08h): Supported LBA-Change 00:07:38.502 Dataset Management (09h): Supported LBA-Change 00:07:38.502 Unknown (0Ch): Supported 00:07:38.502 Unknown (12h): Supported 00:07:38.502 Copy (19h): Supported LBA-Change 00:07:38.502 Unknown (1Dh): Supported LBA-Change 00:07:38.502 00:07:38.502 Error Log 00:07:38.502 ========= 00:07:38.502 00:07:38.502 Arbitration 00:07:38.502 =========== 00:07:38.502 Arbitration Burst: no limit 00:07:38.502 00:07:38.502 Power Management 00:07:38.502 ================ 00:07:38.502 Number of Power States: 1 00:07:38.502 Current Power State: Power State #0 00:07:38.502 Power State #0: 00:07:38.502 Max Power: 25.00 W 00:07:38.502 Non-Operational State: Operational 00:07:38.502 Entry Latency: 16 microseconds 00:07:38.502 Exit Latency: 4 microseconds 00:07:38.502 Relative Read Throughput: 0 00:07:38.502 Relative Read Latency: 0 00:07:38.502 Relative Write Throughput: 0 00:07:38.502 Relative Write Latency: 0 00:07:38.502 Idle Power: Not Reported 00:07:38.502 Active Power: Not Reported 00:07:38.502 Non-Operational Permissive Mode: Not Supported 00:07:38.502 00:07:38.502 Health Information 00:07:38.502 ================== 00:07:38.502 Critical Warnings: 00:07:38.502 Available Spare Space: OK 00:07:38.502 Temperature: OK 00:07:38.502 Device Reliability: OK 00:07:38.502 Read Only: No 00:07:38.502 Volatile Memory Backup: OK 00:07:38.502 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.502 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.502 Available Spare: 0% 00:07:38.502 Available Spare Threshold: 0% 00:07:38.502 Life Percentage Used: 0% 00:07:38.502 Data Units Read: 697 00:07:38.502 Data Units Written: 625 00:07:38.502 Host Read Commands: 38045 00:07:38.502 Host Write Commands: 37831 00:07:38.502 Controller Busy Time: 0 minutes 00:07:38.502 Power Cycles: 0 00:07:38.502 Power On Hours: 0 hours 00:07:38.502 Unsafe Shutdowns: 0 00:07:38.502 Unrecoverable Media Errors: 0 00:07:38.502 Lifetime Error Log Entries: 0 00:07:38.502 Warning Temperature Time: 0 minutes 00:07:38.502 Critical Temperature Time: 0 minutes 00:07:38.502 00:07:38.502 Number of Queues 00:07:38.502 ================ 00:07:38.502 Number of I/O Submission Queues: 64 00:07:38.502 Number of I/O Completion Queues: 64 00:07:38.502 00:07:38.502 ZNS Specific Controller Data 00:07:38.502 ============================ 00:07:38.502 Zone Append Size Limit: 0 00:07:38.502 00:07:38.502 00:07:38.502 Active Namespaces 00:07:38.502 ================= 00:07:38.502 Namespace ID:1 00:07:38.502 Error Recovery Timeout: Unlimited 00:07:38.502 Command Set Identifier: NVM (00h) 00:07:38.502 Deallocate: Supported 00:07:38.502 Deallocated/Unwritten Error: Supported 00:07:38.502 
Deallocated Read Value: All 0x00 00:07:38.502 Deallocate in Write Zeroes: Not Supported 00:07:38.502 Deallocated Guard Field: 0xFFFF 00:07:38.502 Flush: Supported 00:07:38.502 Reservation: Not Supported 00:07:38.502 Metadata Transferred as: Separate Metadata Buffer 00:07:38.502 Namespace Sharing Capabilities: Private 00:07:38.502 Size (in LBAs): 1548666 (5GiB) 00:07:38.502 Capacity (in LBAs): 1548666 (5GiB) 00:07:38.502 Utilization (in LBAs): 1548666 (5GiB) 00:07:38.502 Thin Provisioning: Not Supported 00:07:38.502 Per-NS Atomic Units: No 00:07:38.502 Maximum Single Source Range Length: 128 00:07:38.502 Maximum Copy Length: 128 00:07:38.502 Maximum Source Range Count: 128 00:07:38.502 NGUID/EUI64 Never Reused: No 00:07:38.502 Namespace Write Protected: No 00:07:38.502 Number of LBA Formats: 8 00:07:38.502 Current LBA Format: LBA Format #07 00:07:38.502 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.502 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.502 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.502 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.502 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.502 LBA Forma[2024-11-21 01:33:22.406571] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62749 terminated unexpected 00:07:38.502 t #05: Data Size: 4096 Metadata Size: 8 00:07:38.502 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.502 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.502 00:07:38.502 NVM Specific Namespace Data 00:07:38.502 =========================== 00:07:38.502 Logical Block Storage Tag Mask: 0 00:07:38.502 Protection Information Capabilities: 00:07:38.502 16b Guard Protection Information Storage Tag Support: No 00:07:38.502 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.502 Storage Tag Check Read Support: No 00:07:38.502 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.502 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.502 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.502 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.502 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.503 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.503 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.503 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.503 ===================================================== 00:07:38.503 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:38.503 ===================================================== 00:07:38.503 Controller Capabilities/Features 00:07:38.503 ================================ 00:07:38.503 Vendor ID: 1b36 00:07:38.503 Subsystem Vendor ID: 1af4 00:07:38.503 Serial Number: 12342 00:07:38.503 Model Number: QEMU NVMe Ctrl 00:07:38.503 Firmware Version: 8.0.0 00:07:38.503 Recommended Arb Burst: 6 00:07:38.503 IEEE OUI Identifier: 00 54 52 00:07:38.503 Multi-path I/O 00:07:38.503 May have multiple subsystem ports: No 00:07:38.503 May have multiple controllers: No 00:07:38.503 Associated with SR-IOV VF: No 00:07:38.503 Max Data Transfer Size: 524288 00:07:38.503 Max Number of Namespaces: 256 00:07:38.503 Max 
Number of I/O Queues: 64 00:07:38.503 NVMe Specification Version (VS): 1.4 00:07:38.503 NVMe Specification Version (Identify): 1.4 00:07:38.503 Maximum Queue Entries: 2048 00:07:38.503 Contiguous Queues Required: Yes 00:07:38.503 Arbitration Mechanisms Supported 00:07:38.503 Weighted Round Robin: Not Supported 00:07:38.503 Vendor Specific: Not Supported 00:07:38.503 Reset Timeout: 7500 ms 00:07:38.503 Doorbell Stride: 4 bytes 00:07:38.503 NVM Subsystem Reset: Not Supported 00:07:38.503 Command Sets Supported 00:07:38.503 NVM Command Set: Supported 00:07:38.503 Boot Partition: Not Supported 00:07:38.503 Memory Page Size Minimum: 4096 bytes 00:07:38.503 Memory Page Size Maximum: 65536 bytes 00:07:38.503 Persistent Memory Region: Not Supported 00:07:38.503 Optional Asynchronous Events Supported 00:07:38.503 Namespace Attribute Notices: Supported 00:07:38.503 Firmware Activation Notices: Not Supported 00:07:38.503 ANA Change Notices: Not Supported 00:07:38.503 PLE Aggregate Log Change Notices: Not Supported 00:07:38.503 LBA Status Info Alert Notices: Not Supported 00:07:38.503 EGE Aggregate Log Change Notices: Not Supported 00:07:38.503 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.503 Zone Descriptor Change Notices: Not Supported 00:07:38.503 Discovery Log Change Notices: Not Supported 00:07:38.503 Controller Attributes 00:07:38.503 128-bit Host Identifier: Not Supported 00:07:38.503 Non-Operational Permissive Mode: Not Supported 00:07:38.503 NVM Sets: Not Supported 00:07:38.503 Read Recovery Levels: Not Supported 00:07:38.503 Endurance Groups: Not Supported 00:07:38.503 Predictable Latency Mode: Not Supported 00:07:38.503 Traffic Based Keep ALive: Not Supported 00:07:38.503 Namespace Granularity: Not Supported 00:07:38.503 SQ Associations: Not Supported 00:07:38.503 UUID List: Not Supported 00:07:38.503 Multi-Domain Subsystem: Not Supported 00:07:38.503 Fixed Capacity Management: Not Supported 00:07:38.503 Variable Capacity Management: Not Supported 00:07:38.503 Delete Endurance Group: Not Supported 00:07:38.503 Delete NVM Set: Not Supported 00:07:38.503 Extended LBA Formats Supported: Supported 00:07:38.503 Flexible Data Placement Supported: Not Supported 00:07:38.503 00:07:38.503 Controller Memory Buffer Support 00:07:38.503 ================================ 00:07:38.503 Supported: No 00:07:38.503 00:07:38.503 Persistent Memory Region Support 00:07:38.503 ================================ 00:07:38.503 Supported: No 00:07:38.503 00:07:38.503 Admin Command Set Attributes 00:07:38.503 ============================ 00:07:38.503 Security Send/Receive: Not Supported 00:07:38.503 Format NVM: Supported 00:07:38.503 Firmware Activate/Download: Not Supported 00:07:38.503 Namespace Management: Supported 00:07:38.503 Device Self-Test: Not Supported 00:07:38.503 Directives: Supported 00:07:38.503 NVMe-MI: Not Supported 00:07:38.503 Virtualization Management: Not Supported 00:07:38.503 Doorbell Buffer Config: Supported 00:07:38.503 Get LBA Status Capability: Not Supported 00:07:38.503 Command & Feature Lockdown Capability: Not Supported 00:07:38.503 Abort Command Limit: 4 00:07:38.503 Async Event Request Limit: 4 00:07:38.503 Number of Firmware Slots: N/A 00:07:38.503 Firmware Slot 1 Read-Only: N/A 00:07:38.503 Firmware Activation Without Reset: N/A 00:07:38.503 Multiple Update Detection Support: N/A 00:07:38.503 Firmware Update Granularity: No Information Provided 00:07:38.503 Per-Namespace SMART Log: Yes 00:07:38.503 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.503 Subsystem 
NQN: nqn.2019-08.org.qemu:12342 00:07:38.503 Command Effects Log Page: Supported 00:07:38.503 Get Log Page Extended Data: Supported 00:07:38.503 Telemetry Log Pages: Not Supported 00:07:38.503 Persistent Event Log Pages: Not Supported 00:07:38.503 Supported Log Pages Log Page: May Support 00:07:38.503 Commands Supported & Effects Log Page: Not Supported 00:07:38.503 Feature Identifiers & Effects Log Page:May Support 00:07:38.503 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.503 Data Area 4 for Telemetry Log: Not Supported 00:07:38.503 Error Log Page Entries Supported: 1 00:07:38.503 Keep Alive: Not Supported 00:07:38.503 00:07:38.503 NVM Command Set Attributes 00:07:38.503 ========================== 00:07:38.503 Submission Queue Entry Size 00:07:38.503 Max: 64 00:07:38.503 Min: 64 00:07:38.503 Completion Queue Entry Size 00:07:38.503 Max: 16 00:07:38.503 Min: 16 00:07:38.503 Number of Namespaces: 256 00:07:38.503 Compare Command: Supported 00:07:38.503 Write Uncorrectable Command: Not Supported 00:07:38.503 Dataset Management Command: Supported 00:07:38.503 Write Zeroes Command: Supported 00:07:38.503 Set Features Save Field: Supported 00:07:38.503 Reservations: Not Supported 00:07:38.503 Timestamp: Supported 00:07:38.503 Copy: Supported 00:07:38.503 Volatile Write Cache: Present 00:07:38.503 Atomic Write Unit (Normal): 1 00:07:38.503 Atomic Write Unit (PFail): 1 00:07:38.503 Atomic Compare & Write Unit: 1 00:07:38.503 Fused Compare & Write: Not Supported 00:07:38.503 Scatter-Gather List 00:07:38.503 SGL Command Set: Supported 00:07:38.503 SGL Keyed: Not Supported 00:07:38.503 SGL Bit Bucket Descriptor: Not Supported 00:07:38.503 SGL Metadata Pointer: Not Supported 00:07:38.503 Oversized SGL: Not Supported 00:07:38.503 SGL Metadata Address: Not Supported 00:07:38.503 SGL Offset: Not Supported 00:07:38.503 Transport SGL Data Block: Not Supported 00:07:38.503 Replay Protected Memory Block: Not Supported 00:07:38.503 00:07:38.503 Firmware Slot Information 00:07:38.503 ========================= 00:07:38.503 Active slot: 1 00:07:38.503 Slot 1 Firmware Revision: 1.0 00:07:38.503 00:07:38.503 00:07:38.503 Commands Supported and Effects 00:07:38.503 ============================== 00:07:38.503 Admin Commands 00:07:38.503 -------------- 00:07:38.503 Delete I/O Submission Queue (00h): Supported 00:07:38.503 Create I/O Submission Queue (01h): Supported 00:07:38.503 Get Log Page (02h): Supported 00:07:38.503 Delete I/O Completion Queue (04h): Supported 00:07:38.503 Create I/O Completion Queue (05h): Supported 00:07:38.503 Identify (06h): Supported 00:07:38.503 Abort (08h): Supported 00:07:38.503 Set Features (09h): Supported 00:07:38.503 Get Features (0Ah): Supported 00:07:38.503 Asynchronous Event Request (0Ch): Supported 00:07:38.503 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.503 Directive Send (19h): Supported 00:07:38.503 Directive Receive (1Ah): Supported 00:07:38.503 Virtualization Management (1Ch): Supported 00:07:38.504 Doorbell Buffer Config (7Ch): Supported 00:07:38.504 Format NVM (80h): Supported LBA-Change 00:07:38.504 I/O Commands 00:07:38.504 ------------ 00:07:38.504 Flush (00h): Supported LBA-Change 00:07:38.504 Write (01h): Supported LBA-Change 00:07:38.504 Read (02h): Supported 00:07:38.504 Compare (05h): Supported 00:07:38.504 Write Zeroes (08h): Supported LBA-Change 00:07:38.504 Dataset Management (09h): Supported LBA-Change 00:07:38.504 Unknown (0Ch): Supported 00:07:38.504 Unknown (12h): Supported 00:07:38.504 Copy (19h): Supported LBA-Change 
00:07:38.504 Unknown (1Dh): Supported LBA-Change 00:07:38.504 00:07:38.504 Error Log 00:07:38.504 ========= 00:07:38.504 00:07:38.504 Arbitration 00:07:38.504 =========== 00:07:38.504 Arbitration Burst: no limit 00:07:38.504 00:07:38.504 Power Management 00:07:38.504 ================ 00:07:38.504 Number of Power States: 1 00:07:38.504 Current Power State: Power State #0 00:07:38.504 Power State #0: 00:07:38.504 Max Power: 25.00 W 00:07:38.504 Non-Operational State: Operational 00:07:38.504 Entry Latency: 16 microseconds 00:07:38.504 Exit Latency: 4 microseconds 00:07:38.504 Relative Read Throughput: 0 00:07:38.504 Relative Read Latency: 0 00:07:38.504 Relative Write Throughput: 0 00:07:38.504 Relative Write Latency: 0 00:07:38.504 Idle Power: Not Reported 00:07:38.504 Active Power: Not Reported 00:07:38.504 Non-Operational Permissive Mode: Not Supported 00:07:38.504 00:07:38.504 Health Information 00:07:38.504 ================== 00:07:38.504 Critical Warnings: 00:07:38.504 Available Spare Space: OK 00:07:38.504 Temperature: OK 00:07:38.504 Device Reliability: OK 00:07:38.504 Read Only: No 00:07:38.504 Volatile Memory Backup: OK 00:07:38.504 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.504 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.504 Available Spare: 0% 00:07:38.504 Available Spare Threshold: 0% 00:07:38.504 Life Percentage Used: 0% 00:07:38.504 Data Units Read: 2154 00:07:38.504 Data Units Written: 1941 00:07:38.504 Host Read Commands: 115646 00:07:38.504 Host Write Commands: 113915 00:07:38.504 Controller Busy Time: 0 minutes 00:07:38.504 Power Cycles: 0 00:07:38.504 Power On Hours: 0 hours 00:07:38.504 Unsafe Shutdowns: 0 00:07:38.504 Unrecoverable Media Errors: 0 00:07:38.504 Lifetime Error Log Entries: 0 00:07:38.504 Warning Temperature Time: 0 minutes 00:07:38.504 Critical Temperature Time: 0 minutes 00:07:38.504 00:07:38.504 Number of Queues 00:07:38.504 ================ 00:07:38.504 Number of I/O Submission Queues: 64 00:07:38.504 Number of I/O Completion Queues: 64 00:07:38.504 00:07:38.504 ZNS Specific Controller Data 00:07:38.504 ============================ 00:07:38.504 Zone Append Size Limit: 0 00:07:38.504 00:07:38.504 00:07:38.504 Active Namespaces 00:07:38.504 ================= 00:07:38.504 Namespace ID:1 00:07:38.504 Error Recovery Timeout: Unlimited 00:07:38.504 Command Set Identifier: NVM (00h) 00:07:38.504 Deallocate: Supported 00:07:38.504 Deallocated/Unwritten Error: Supported 00:07:38.504 Deallocated Read Value: All 0x00 00:07:38.504 Deallocate in Write Zeroes: Not Supported 00:07:38.504 Deallocated Guard Field: 0xFFFF 00:07:38.504 Flush: Supported 00:07:38.504 Reservation: Not Supported 00:07:38.504 Namespace Sharing Capabilities: Private 00:07:38.504 Size (in LBAs): 1048576 (4GiB) 00:07:38.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.504 Thin Provisioning: Not Supported 00:07:38.504 Per-NS Atomic Units: No 00:07:38.504 Maximum Single Source Range Length: 128 00:07:38.504 Maximum Copy Length: 128 00:07:38.504 Maximum Source Range Count: 128 00:07:38.504 NGUID/EUI64 Never Reused: No 00:07:38.504 Namespace Write Protected: No 00:07:38.504 Number of LBA Formats: 8 00:07:38.504 Current LBA Format: LBA Format #04 00:07:38.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.504 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:07:38.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.504 00:07:38.504 NVM Specific Namespace Data 00:07:38.504 =========================== 00:07:38.504 Logical Block Storage Tag Mask: 0 00:07:38.504 Protection Information Capabilities: 00:07:38.504 16b Guard Protection Information Storage Tag Support: No 00:07:38.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.504 Storage Tag Check Read Support: No 00:07:38.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Namespace ID:2 00:07:38.504 Error Recovery Timeout: Unlimited 00:07:38.504 Command Set Identifier: NVM (00h) 00:07:38.504 Deallocate: Supported 00:07:38.504 Deallocated/Unwritten Error: Supported 00:07:38.504 Deallocated Read Value: All 0x00 00:07:38.504 Deallocate in Write Zeroes: Not Supported 00:07:38.504 Deallocated Guard Field: 0xFFFF 00:07:38.504 Flush: Supported 00:07:38.504 Reservation: Not Supported 00:07:38.504 Namespace Sharing Capabilities: Private 00:07:38.504 Size (in LBAs): 1048576 (4GiB) 00:07:38.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.504 Thin Provisioning: Not Supported 00:07:38.504 Per-NS Atomic Units: No 00:07:38.504 Maximum Single Source Range Length: 128 00:07:38.504 Maximum Copy Length: 128 00:07:38.504 Maximum Source Range Count: 128 00:07:38.504 NGUID/EUI64 Never Reused: No 00:07:38.504 Namespace Write Protected: No 00:07:38.504 Number of LBA Formats: 8 00:07:38.504 Current LBA Format: LBA Format #04 00:07:38.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.504 00:07:38.504 NVM Specific Namespace Data 00:07:38.504 =========================== 00:07:38.504 Logical Block Storage Tag Mask: 0 00:07:38.504 Protection Information Capabilities: 00:07:38.504 16b Guard Protection Information Storage Tag Support: No 00:07:38.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.504 Storage Tag Check Read Support: No 00:07:38.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:07:38.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.504 Namespace ID:3 00:07:38.504 Error Recovery Timeout: Unlimited 00:07:38.504 Command Set Identifier: NVM (00h) 00:07:38.504 Deallocate: Supported 00:07:38.504 Deallocated/Unwritten Error: Supported 00:07:38.504 Deallocated Read Value: All 0x00 00:07:38.504 Deallocate in Write Zeroes: Not Supported 00:07:38.504 Deallocated Guard Field: 0xFFFF 00:07:38.504 Flush: Supported 00:07:38.504 Reservation: Not Supported 00:07:38.504 Namespace Sharing Capabilities: Private 00:07:38.504 Size (in LBAs): 1048576 (4GiB) 00:07:38.504 Capacity (in LBAs): 1048576 (4GiB) 00:07:38.504 Utilization (in LBAs): 1048576 (4GiB) 00:07:38.504 Thin Provisioning: Not Supported 00:07:38.504 Per-NS Atomic Units: No 00:07:38.504 Maximum Single Source Range Length: 128 00:07:38.504 Maximum Copy Length: 128 00:07:38.504 Maximum Source Range Count: 128 00:07:38.504 NGUID/EUI64 Never Reused: No 00:07:38.504 Namespace Write Protected: No 00:07:38.504 Number of LBA Formats: 8 00:07:38.504 Current LBA Format: LBA Format #04 00:07:38.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.505 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.505 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.505 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.505 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.505 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.505 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.505 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.505 00:07:38.505 NVM Specific Namespace Data 00:07:38.505 =========================== 00:07:38.505 Logical Block Storage Tag Mask: 0 00:07:38.505 Protection Information Capabilities: 00:07:38.505 16b Guard Protection Information Storage Tag Support: No 00:07:38.505 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.505 Storage Tag Check Read Support: No 00:07:38.505 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.505 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.505 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:38.765 ===================================================== 00:07:38.765 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:38.765 ===================================================== 00:07:38.765 Controller Capabilities/Features 00:07:38.765 ================================ 00:07:38.765 Vendor ID: 1b36 00:07:38.765 Subsystem Vendor ID: 1af4 00:07:38.765 Serial Number: 12340 00:07:38.765 Model Number: QEMU NVMe Ctrl 00:07:38.765 Firmware Version: 8.0.0 00:07:38.765 Recommended Arb Burst: 6 00:07:38.765 IEEE OUI Identifier: 00 54 52 00:07:38.765 Multi-path I/O 00:07:38.765 May have multiple subsystem ports: No 00:07:38.765 May have multiple controllers: No 00:07:38.765 Associated with SR-IOV VF: No 00:07:38.765 Max Data Transfer Size: 524288 00:07:38.765 Max Number of Namespaces: 256 00:07:38.765 Max Number of I/O Queues: 64 00:07:38.765 NVMe Specification Version (VS): 1.4 00:07:38.765 NVMe Specification Version (Identify): 1.4 00:07:38.765 Maximum Queue Entries: 2048 00:07:38.765 Contiguous Queues Required: Yes 00:07:38.765 Arbitration Mechanisms Supported 00:07:38.765 Weighted Round Robin: Not Supported 00:07:38.765 Vendor Specific: Not Supported 00:07:38.765 Reset Timeout: 7500 ms 00:07:38.765 Doorbell Stride: 4 bytes 00:07:38.765 NVM Subsystem Reset: Not Supported 00:07:38.765 Command Sets Supported 00:07:38.765 NVM Command Set: Supported 00:07:38.765 Boot Partition: Not Supported 00:07:38.765 Memory Page Size Minimum: 4096 bytes 00:07:38.765 Memory Page Size Maximum: 65536 bytes 00:07:38.765 Persistent Memory Region: Not Supported 00:07:38.765 Optional Asynchronous Events Supported 00:07:38.765 Namespace Attribute Notices: Supported 00:07:38.765 Firmware Activation Notices: Not Supported 00:07:38.765 ANA Change Notices: Not Supported 00:07:38.765 PLE Aggregate Log Change Notices: Not Supported 00:07:38.765 LBA Status Info Alert Notices: Not Supported 00:07:38.765 EGE Aggregate Log Change Notices: Not Supported 00:07:38.765 Normal NVM Subsystem Shutdown event: Not Supported 00:07:38.765 Zone Descriptor Change Notices: Not Supported 00:07:38.765 Discovery Log Change Notices: Not Supported 00:07:38.765 Controller Attributes 00:07:38.765 128-bit Host Identifier: Not Supported 00:07:38.765 Non-Operational Permissive Mode: Not Supported 00:07:38.765 NVM Sets: Not Supported 00:07:38.765 Read Recovery Levels: Not Supported 00:07:38.765 Endurance Groups: Not Supported 00:07:38.765 Predictable Latency Mode: Not Supported 00:07:38.765 Traffic Based Keep ALive: Not Supported 00:07:38.765 Namespace Granularity: Not Supported 00:07:38.765 SQ Associations: Not Supported 00:07:38.765 UUID List: Not Supported 00:07:38.765 Multi-Domain Subsystem: Not Supported 00:07:38.765 Fixed Capacity Management: Not Supported 00:07:38.765 Variable Capacity Management: Not Supported 00:07:38.765 Delete Endurance Group: Not Supported 00:07:38.765 Delete NVM Set: Not Supported 00:07:38.765 Extended LBA Formats Supported: Supported 00:07:38.765 Flexible Data Placement Supported: Not Supported 00:07:38.765 00:07:38.765 Controller Memory Buffer Support 00:07:38.765 ================================ 00:07:38.765 Supported: No 00:07:38.765 00:07:38.765 Persistent Memory Region Support 00:07:38.765 ================================ 00:07:38.765 Supported: No 00:07:38.765 00:07:38.765 Admin Command Set Attributes 00:07:38.765 ============================ 00:07:38.765 Security Send/Receive: Not Supported 00:07:38.765 
Format NVM: Supported 00:07:38.765 Firmware Activate/Download: Not Supported 00:07:38.765 Namespace Management: Supported 00:07:38.765 Device Self-Test: Not Supported 00:07:38.765 Directives: Supported 00:07:38.765 NVMe-MI: Not Supported 00:07:38.765 Virtualization Management: Not Supported 00:07:38.765 Doorbell Buffer Config: Supported 00:07:38.765 Get LBA Status Capability: Not Supported 00:07:38.765 Command & Feature Lockdown Capability: Not Supported 00:07:38.765 Abort Command Limit: 4 00:07:38.765 Async Event Request Limit: 4 00:07:38.765 Number of Firmware Slots: N/A 00:07:38.765 Firmware Slot 1 Read-Only: N/A 00:07:38.765 Firmware Activation Without Reset: N/A 00:07:38.766 Multiple Update Detection Support: N/A 00:07:38.766 Firmware Update Granularity: No Information Provided 00:07:38.766 Per-Namespace SMART Log: Yes 00:07:38.766 Asymmetric Namespace Access Log Page: Not Supported 00:07:38.766 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:38.766 Command Effects Log Page: Supported 00:07:38.766 Get Log Page Extended Data: Supported 00:07:38.766 Telemetry Log Pages: Not Supported 00:07:38.766 Persistent Event Log Pages: Not Supported 00:07:38.766 Supported Log Pages Log Page: May Support 00:07:38.766 Commands Supported & Effects Log Page: Not Supported 00:07:38.766 Feature Identifiers & Effects Log Page:May Support 00:07:38.766 NVMe-MI Commands & Effects Log Page: May Support 00:07:38.766 Data Area 4 for Telemetry Log: Not Supported 00:07:38.766 Error Log Page Entries Supported: 1 00:07:38.766 Keep Alive: Not Supported 00:07:38.766 00:07:38.766 NVM Command Set Attributes 00:07:38.766 ========================== 00:07:38.766 Submission Queue Entry Size 00:07:38.766 Max: 64 00:07:38.766 Min: 64 00:07:38.766 Completion Queue Entry Size 00:07:38.766 Max: 16 00:07:38.766 Min: 16 00:07:38.766 Number of Namespaces: 256 00:07:38.766 Compare Command: Supported 00:07:38.766 Write Uncorrectable Command: Not Supported 00:07:38.766 Dataset Management Command: Supported 00:07:38.766 Write Zeroes Command: Supported 00:07:38.766 Set Features Save Field: Supported 00:07:38.766 Reservations: Not Supported 00:07:38.766 Timestamp: Supported 00:07:38.766 Copy: Supported 00:07:38.766 Volatile Write Cache: Present 00:07:38.766 Atomic Write Unit (Normal): 1 00:07:38.766 Atomic Write Unit (PFail): 1 00:07:38.766 Atomic Compare & Write Unit: 1 00:07:38.766 Fused Compare & Write: Not Supported 00:07:38.766 Scatter-Gather List 00:07:38.766 SGL Command Set: Supported 00:07:38.766 SGL Keyed: Not Supported 00:07:38.766 SGL Bit Bucket Descriptor: Not Supported 00:07:38.766 SGL Metadata Pointer: Not Supported 00:07:38.766 Oversized SGL: Not Supported 00:07:38.766 SGL Metadata Address: Not Supported 00:07:38.766 SGL Offset: Not Supported 00:07:38.766 Transport SGL Data Block: Not Supported 00:07:38.766 Replay Protected Memory Block: Not Supported 00:07:38.766 00:07:38.766 Firmware Slot Information 00:07:38.766 ========================= 00:07:38.766 Active slot: 1 00:07:38.766 Slot 1 Firmware Revision: 1.0 00:07:38.766 00:07:38.766 00:07:38.766 Commands Supported and Effects 00:07:38.766 ============================== 00:07:38.766 Admin Commands 00:07:38.766 -------------- 00:07:38.766 Delete I/O Submission Queue (00h): Supported 00:07:38.766 Create I/O Submission Queue (01h): Supported 00:07:38.766 Get Log Page (02h): Supported 00:07:38.766 Delete I/O Completion Queue (04h): Supported 00:07:38.766 Create I/O Completion Queue (05h): Supported 00:07:38.766 Identify (06h): Supported 00:07:38.766 Abort (08h): Supported 
00:07:38.766 Set Features (09h): Supported 00:07:38.766 Get Features (0Ah): Supported 00:07:38.766 Asynchronous Event Request (0Ch): Supported 00:07:38.766 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:38.766 Directive Send (19h): Supported 00:07:38.766 Directive Receive (1Ah): Supported 00:07:38.766 Virtualization Management (1Ch): Supported 00:07:38.766 Doorbell Buffer Config (7Ch): Supported 00:07:38.766 Format NVM (80h): Supported LBA-Change 00:07:38.766 I/O Commands 00:07:38.766 ------------ 00:07:38.766 Flush (00h): Supported LBA-Change 00:07:38.766 Write (01h): Supported LBA-Change 00:07:38.766 Read (02h): Supported 00:07:38.766 Compare (05h): Supported 00:07:38.766 Write Zeroes (08h): Supported LBA-Change 00:07:38.766 Dataset Management (09h): Supported LBA-Change 00:07:38.766 Unknown (0Ch): Supported 00:07:38.766 Unknown (12h): Supported 00:07:38.766 Copy (19h): Supported LBA-Change 00:07:38.766 Unknown (1Dh): Supported LBA-Change 00:07:38.766 00:07:38.766 Error Log 00:07:38.766 ========= 00:07:38.766 00:07:38.766 Arbitration 00:07:38.766 =========== 00:07:38.766 Arbitration Burst: no limit 00:07:38.766 00:07:38.766 Power Management 00:07:38.766 ================ 00:07:38.766 Number of Power States: 1 00:07:38.766 Current Power State: Power State #0 00:07:38.766 Power State #0: 00:07:38.766 Max Power: 25.00 W 00:07:38.766 Non-Operational State: Operational 00:07:38.766 Entry Latency: 16 microseconds 00:07:38.766 Exit Latency: 4 microseconds 00:07:38.766 Relative Read Throughput: 0 00:07:38.766 Relative Read Latency: 0 00:07:38.766 Relative Write Throughput: 0 00:07:38.766 Relative Write Latency: 0 00:07:38.766 Idle Power: Not Reported 00:07:38.766 Active Power: Not Reported 00:07:38.766 Non-Operational Permissive Mode: Not Supported 00:07:38.766 00:07:38.766 Health Information 00:07:38.766 ================== 00:07:38.766 Critical Warnings: 00:07:38.766 Available Spare Space: OK 00:07:38.766 Temperature: OK 00:07:38.766 Device Reliability: OK 00:07:38.766 Read Only: No 00:07:38.766 Volatile Memory Backup: OK 00:07:38.766 Current Temperature: 323 Kelvin (50 Celsius) 00:07:38.766 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:38.766 Available Spare: 0% 00:07:38.766 Available Spare Threshold: 0% 00:07:38.766 Life Percentage Used: 0% 00:07:38.766 Data Units Read: 697 00:07:38.766 Data Units Written: 625 00:07:38.766 Host Read Commands: 38045 00:07:38.766 Host Write Commands: 37831 00:07:38.766 Controller Busy Time: 0 minutes 00:07:38.766 Power Cycles: 0 00:07:38.766 Power On Hours: 0 hours 00:07:38.766 Unsafe Shutdowns: 0 00:07:38.766 Unrecoverable Media Errors: 0 00:07:38.766 Lifetime Error Log Entries: 0 00:07:38.766 Warning Temperature Time: 0 minutes 00:07:38.766 Critical Temperature Time: 0 minutes 00:07:38.766 00:07:38.766 Number of Queues 00:07:38.766 ================ 00:07:38.766 Number of I/O Submission Queues: 64 00:07:38.766 Number of I/O Completion Queues: 64 00:07:38.766 00:07:38.766 ZNS Specific Controller Data 00:07:38.766 ============================ 00:07:38.766 Zone Append Size Limit: 0 00:07:38.766 00:07:38.766 00:07:38.766 Active Namespaces 00:07:38.766 ================= 00:07:38.766 Namespace ID:1 00:07:38.766 Error Recovery Timeout: Unlimited 00:07:38.766 Command Set Identifier: NVM (00h) 00:07:38.766 Deallocate: Supported 00:07:38.766 Deallocated/Unwritten Error: Supported 00:07:38.766 Deallocated Read Value: All 0x00 00:07:38.766 Deallocate in Write Zeroes: Not Supported 00:07:38.766 Deallocated Guard Field: 0xFFFF 00:07:38.766 Flush: 
Supported 00:07:38.766 Reservation: Not Supported 00:07:38.766 Metadata Transferred as: Separate Metadata Buffer 00:07:38.766 Namespace Sharing Capabilities: Private 00:07:38.766 Size (in LBAs): 1548666 (5GiB) 00:07:38.766 Capacity (in LBAs): 1548666 (5GiB) 00:07:38.766 Utilization (in LBAs): 1548666 (5GiB) 00:07:38.766 Thin Provisioning: Not Supported 00:07:38.766 Per-NS Atomic Units: No 00:07:38.766 Maximum Single Source Range Length: 128 00:07:38.766 Maximum Copy Length: 128 00:07:38.766 Maximum Source Range Count: 128 00:07:38.766 NGUID/EUI64 Never Reused: No 00:07:38.766 Namespace Write Protected: No 00:07:38.766 Number of LBA Formats: 8 00:07:38.766 Current LBA Format: LBA Format #07 00:07:38.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:38.766 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:38.766 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:38.766 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:38.766 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:38.766 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:38.766 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:38.766 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:38.766 00:07:38.766 NVM Specific Namespace Data 00:07:38.766 =========================== 00:07:38.766 Logical Block Storage Tag Mask: 0 00:07:38.766 Protection Information Capabilities: 00:07:38.766 16b Guard Protection Information Storage Tag Support: No 00:07:38.766 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:38.766 Storage Tag Check Read Support: No 00:07:38.766 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:38.766 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:38.767 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:39.025 ===================================================== 00:07:39.025 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:39.025 ===================================================== 00:07:39.025 Controller Capabilities/Features 00:07:39.025 ================================ 00:07:39.025 Vendor ID: 1b36 00:07:39.025 Subsystem Vendor ID: 1af4 00:07:39.025 Serial Number: 12341 00:07:39.025 Model Number: QEMU NVMe Ctrl 00:07:39.025 Firmware Version: 8.0.0 00:07:39.025 Recommended Arb Burst: 6 00:07:39.025 IEEE OUI Identifier: 00 54 52 00:07:39.025 Multi-path I/O 00:07:39.025 May have multiple subsystem ports: No 00:07:39.025 May have multiple controllers: No 00:07:39.026 Associated with SR-IOV VF: No 00:07:39.026 Max Data Transfer Size: 524288 00:07:39.026 Max Number of Namespaces: 256 00:07:39.026 Max Number of I/O Queues: 64 00:07:39.026 NVMe 
Specification Version (VS): 1.4 00:07:39.026 NVMe Specification Version (Identify): 1.4 00:07:39.026 Maximum Queue Entries: 2048 00:07:39.026 Contiguous Queues Required: Yes 00:07:39.026 Arbitration Mechanisms Supported 00:07:39.026 Weighted Round Robin: Not Supported 00:07:39.026 Vendor Specific: Not Supported 00:07:39.026 Reset Timeout: 7500 ms 00:07:39.026 Doorbell Stride: 4 bytes 00:07:39.026 NVM Subsystem Reset: Not Supported 00:07:39.026 Command Sets Supported 00:07:39.026 NVM Command Set: Supported 00:07:39.026 Boot Partition: Not Supported 00:07:39.026 Memory Page Size Minimum: 4096 bytes 00:07:39.026 Memory Page Size Maximum: 65536 bytes 00:07:39.026 Persistent Memory Region: Not Supported 00:07:39.026 Optional Asynchronous Events Supported 00:07:39.026 Namespace Attribute Notices: Supported 00:07:39.026 Firmware Activation Notices: Not Supported 00:07:39.026 ANA Change Notices: Not Supported 00:07:39.026 PLE Aggregate Log Change Notices: Not Supported 00:07:39.026 LBA Status Info Alert Notices: Not Supported 00:07:39.026 EGE Aggregate Log Change Notices: Not Supported 00:07:39.026 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.026 Zone Descriptor Change Notices: Not Supported 00:07:39.026 Discovery Log Change Notices: Not Supported 00:07:39.026 Controller Attributes 00:07:39.026 128-bit Host Identifier: Not Supported 00:07:39.026 Non-Operational Permissive Mode: Not Supported 00:07:39.026 NVM Sets: Not Supported 00:07:39.026 Read Recovery Levels: Not Supported 00:07:39.026 Endurance Groups: Not Supported 00:07:39.026 Predictable Latency Mode: Not Supported 00:07:39.026 Traffic Based Keep ALive: Not Supported 00:07:39.026 Namespace Granularity: Not Supported 00:07:39.026 SQ Associations: Not Supported 00:07:39.026 UUID List: Not Supported 00:07:39.026 Multi-Domain Subsystem: Not Supported 00:07:39.026 Fixed Capacity Management: Not Supported 00:07:39.026 Variable Capacity Management: Not Supported 00:07:39.026 Delete Endurance Group: Not Supported 00:07:39.026 Delete NVM Set: Not Supported 00:07:39.026 Extended LBA Formats Supported: Supported 00:07:39.026 Flexible Data Placement Supported: Not Supported 00:07:39.026 00:07:39.026 Controller Memory Buffer Support 00:07:39.026 ================================ 00:07:39.026 Supported: No 00:07:39.026 00:07:39.026 Persistent Memory Region Support 00:07:39.026 ================================ 00:07:39.026 Supported: No 00:07:39.026 00:07:39.026 Admin Command Set Attributes 00:07:39.026 ============================ 00:07:39.026 Security Send/Receive: Not Supported 00:07:39.026 Format NVM: Supported 00:07:39.026 Firmware Activate/Download: Not Supported 00:07:39.026 Namespace Management: Supported 00:07:39.026 Device Self-Test: Not Supported 00:07:39.026 Directives: Supported 00:07:39.026 NVMe-MI: Not Supported 00:07:39.026 Virtualization Management: Not Supported 00:07:39.026 Doorbell Buffer Config: Supported 00:07:39.026 Get LBA Status Capability: Not Supported 00:07:39.026 Command & Feature Lockdown Capability: Not Supported 00:07:39.026 Abort Command Limit: 4 00:07:39.026 Async Event Request Limit: 4 00:07:39.026 Number of Firmware Slots: N/A 00:07:39.026 Firmware Slot 1 Read-Only: N/A 00:07:39.026 Firmware Activation Without Reset: N/A 00:07:39.026 Multiple Update Detection Support: N/A 00:07:39.026 Firmware Update Granularity: No Information Provided 00:07:39.026 Per-Namespace SMART Log: Yes 00:07:39.026 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.026 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:07:39.026 Command Effects Log Page: Supported 00:07:39.026 Get Log Page Extended Data: Supported 00:07:39.026 Telemetry Log Pages: Not Supported 00:07:39.026 Persistent Event Log Pages: Not Supported 00:07:39.026 Supported Log Pages Log Page: May Support 00:07:39.026 Commands Supported & Effects Log Page: Not Supported 00:07:39.026 Feature Identifiers & Effects Log Page:May Support 00:07:39.026 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.026 Data Area 4 for Telemetry Log: Not Supported 00:07:39.026 Error Log Page Entries Supported: 1 00:07:39.026 Keep Alive: Not Supported 00:07:39.026 00:07:39.026 NVM Command Set Attributes 00:07:39.026 ========================== 00:07:39.026 Submission Queue Entry Size 00:07:39.026 Max: 64 00:07:39.026 Min: 64 00:07:39.026 Completion Queue Entry Size 00:07:39.026 Max: 16 00:07:39.026 Min: 16 00:07:39.026 Number of Namespaces: 256 00:07:39.026 Compare Command: Supported 00:07:39.026 Write Uncorrectable Command: Not Supported 00:07:39.026 Dataset Management Command: Supported 00:07:39.026 Write Zeroes Command: Supported 00:07:39.026 Set Features Save Field: Supported 00:07:39.026 Reservations: Not Supported 00:07:39.026 Timestamp: Supported 00:07:39.026 Copy: Supported 00:07:39.026 Volatile Write Cache: Present 00:07:39.026 Atomic Write Unit (Normal): 1 00:07:39.026 Atomic Write Unit (PFail): 1 00:07:39.026 Atomic Compare & Write Unit: 1 00:07:39.026 Fused Compare & Write: Not Supported 00:07:39.026 Scatter-Gather List 00:07:39.026 SGL Command Set: Supported 00:07:39.026 SGL Keyed: Not Supported 00:07:39.026 SGL Bit Bucket Descriptor: Not Supported 00:07:39.026 SGL Metadata Pointer: Not Supported 00:07:39.026 Oversized SGL: Not Supported 00:07:39.026 SGL Metadata Address: Not Supported 00:07:39.026 SGL Offset: Not Supported 00:07:39.026 Transport SGL Data Block: Not Supported 00:07:39.026 Replay Protected Memory Block: Not Supported 00:07:39.026 00:07:39.026 Firmware Slot Information 00:07:39.026 ========================= 00:07:39.026 Active slot: 1 00:07:39.026 Slot 1 Firmware Revision: 1.0 00:07:39.026 00:07:39.026 00:07:39.026 Commands Supported and Effects 00:07:39.026 ============================== 00:07:39.026 Admin Commands 00:07:39.026 -------------- 00:07:39.026 Delete I/O Submission Queue (00h): Supported 00:07:39.026 Create I/O Submission Queue (01h): Supported 00:07:39.026 Get Log Page (02h): Supported 00:07:39.026 Delete I/O Completion Queue (04h): Supported 00:07:39.026 Create I/O Completion Queue (05h): Supported 00:07:39.026 Identify (06h): Supported 00:07:39.026 Abort (08h): Supported 00:07:39.026 Set Features (09h): Supported 00:07:39.026 Get Features (0Ah): Supported 00:07:39.026 Asynchronous Event Request (0Ch): Supported 00:07:39.026 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.026 Directive Send (19h): Supported 00:07:39.026 Directive Receive (1Ah): Supported 00:07:39.026 Virtualization Management (1Ch): Supported 00:07:39.026 Doorbell Buffer Config (7Ch): Supported 00:07:39.026 Format NVM (80h): Supported LBA-Change 00:07:39.026 I/O Commands 00:07:39.026 ------------ 00:07:39.026 Flush (00h): Supported LBA-Change 00:07:39.026 Write (01h): Supported LBA-Change 00:07:39.026 Read (02h): Supported 00:07:39.026 Compare (05h): Supported 00:07:39.026 Write Zeroes (08h): Supported LBA-Change 00:07:39.026 Dataset Management (09h): Supported LBA-Change 00:07:39.026 Unknown (0Ch): Supported 00:07:39.026 Unknown (12h): Supported 00:07:39.026 Copy (19h): Supported LBA-Change 00:07:39.026 Unknown (1Dh): 
Supported LBA-Change 00:07:39.026 00:07:39.026 Error Log 00:07:39.026 ========= 00:07:39.026 00:07:39.026 Arbitration 00:07:39.026 =========== 00:07:39.026 Arbitration Burst: no limit 00:07:39.026 00:07:39.026 Power Management 00:07:39.026 ================ 00:07:39.026 Number of Power States: 1 00:07:39.026 Current Power State: Power State #0 00:07:39.026 Power State #0: 00:07:39.026 Max Power: 25.00 W 00:07:39.026 Non-Operational State: Operational 00:07:39.026 Entry Latency: 16 microseconds 00:07:39.026 Exit Latency: 4 microseconds 00:07:39.026 Relative Read Throughput: 0 00:07:39.026 Relative Read Latency: 0 00:07:39.026 Relative Write Throughput: 0 00:07:39.026 Relative Write Latency: 0 00:07:39.026 Idle Power: Not Reported 00:07:39.026 Active Power: Not Reported 00:07:39.026 Non-Operational Permissive Mode: Not Supported 00:07:39.026 00:07:39.026 Health Information 00:07:39.026 ================== 00:07:39.026 Critical Warnings: 00:07:39.026 Available Spare Space: OK 00:07:39.026 Temperature: OK 00:07:39.026 Device Reliability: OK 00:07:39.026 Read Only: No 00:07:39.026 Volatile Memory Backup: OK 00:07:39.026 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.026 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.026 Available Spare: 0% 00:07:39.026 Available Spare Threshold: 0% 00:07:39.026 Life Percentage Used: 0% 00:07:39.026 Data Units Read: 1077 00:07:39.026 Data Units Written: 944 00:07:39.027 Host Read Commands: 56416 00:07:39.027 Host Write Commands: 55207 00:07:39.027 Controller Busy Time: 0 minutes 00:07:39.027 Power Cycles: 0 00:07:39.027 Power On Hours: 0 hours 00:07:39.027 Unsafe Shutdowns: 0 00:07:39.027 Unrecoverable Media Errors: 0 00:07:39.027 Lifetime Error Log Entries: 0 00:07:39.027 Warning Temperature Time: 0 minutes 00:07:39.027 Critical Temperature Time: 0 minutes 00:07:39.027 00:07:39.027 Number of Queues 00:07:39.027 ================ 00:07:39.027 Number of I/O Submission Queues: 64 00:07:39.027 Number of I/O Completion Queues: 64 00:07:39.027 00:07:39.027 ZNS Specific Controller Data 00:07:39.027 ============================ 00:07:39.027 Zone Append Size Limit: 0 00:07:39.027 00:07:39.027 00:07:39.027 Active Namespaces 00:07:39.027 ================= 00:07:39.027 Namespace ID:1 00:07:39.027 Error Recovery Timeout: Unlimited 00:07:39.027 Command Set Identifier: NVM (00h) 00:07:39.027 Deallocate: Supported 00:07:39.027 Deallocated/Unwritten Error: Supported 00:07:39.027 Deallocated Read Value: All 0x00 00:07:39.027 Deallocate in Write Zeroes: Not Supported 00:07:39.027 Deallocated Guard Field: 0xFFFF 00:07:39.027 Flush: Supported 00:07:39.027 Reservation: Not Supported 00:07:39.027 Namespace Sharing Capabilities: Private 00:07:39.027 Size (in LBAs): 1310720 (5GiB) 00:07:39.027 Capacity (in LBAs): 1310720 (5GiB) 00:07:39.027 Utilization (in LBAs): 1310720 (5GiB) 00:07:39.027 Thin Provisioning: Not Supported 00:07:39.027 Per-NS Atomic Units: No 00:07:39.027 Maximum Single Source Range Length: 128 00:07:39.027 Maximum Copy Length: 128 00:07:39.027 Maximum Source Range Count: 128 00:07:39.027 NGUID/EUI64 Never Reused: No 00:07:39.027 Namespace Write Protected: No 00:07:39.027 Number of LBA Formats: 8 00:07:39.027 Current LBA Format: LBA Format #04 00:07:39.027 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.027 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.027 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.027 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.027 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:39.027 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.027 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.027 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.027 00:07:39.027 NVM Specific Namespace Data 00:07:39.027 =========================== 00:07:39.027 Logical Block Storage Tag Mask: 0 00:07:39.027 Protection Information Capabilities: 00:07:39.027 16b Guard Protection Information Storage Tag Support: No 00:07:39.027 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.027 Storage Tag Check Read Support: No 00:07:39.027 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.027 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.027 01:33:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:39.286 ===================================================== 00:07:39.286 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:39.286 ===================================================== 00:07:39.286 Controller Capabilities/Features 00:07:39.286 ================================ 00:07:39.286 Vendor ID: 1b36 00:07:39.286 Subsystem Vendor ID: 1af4 00:07:39.286 Serial Number: 12342 00:07:39.286 Model Number: QEMU NVMe Ctrl 00:07:39.286 Firmware Version: 8.0.0 00:07:39.286 Recommended Arb Burst: 6 00:07:39.286 IEEE OUI Identifier: 00 54 52 00:07:39.286 Multi-path I/O 00:07:39.286 May have multiple subsystem ports: No 00:07:39.286 May have multiple controllers: No 00:07:39.286 Associated with SR-IOV VF: No 00:07:39.286 Max Data Transfer Size: 524288 00:07:39.286 Max Number of Namespaces: 256 00:07:39.286 Max Number of I/O Queues: 64 00:07:39.286 NVMe Specification Version (VS): 1.4 00:07:39.286 NVMe Specification Version (Identify): 1.4 00:07:39.286 Maximum Queue Entries: 2048 00:07:39.286 Contiguous Queues Required: Yes 00:07:39.286 Arbitration Mechanisms Supported 00:07:39.286 Weighted Round Robin: Not Supported 00:07:39.286 Vendor Specific: Not Supported 00:07:39.286 Reset Timeout: 7500 ms 00:07:39.286 Doorbell Stride: 4 bytes 00:07:39.286 NVM Subsystem Reset: Not Supported 00:07:39.286 Command Sets Supported 00:07:39.286 NVM Command Set: Supported 00:07:39.286 Boot Partition: Not Supported 00:07:39.286 Memory Page Size Minimum: 4096 bytes 00:07:39.286 Memory Page Size Maximum: 65536 bytes 00:07:39.286 Persistent Memory Region: Not Supported 00:07:39.286 Optional Asynchronous Events Supported 00:07:39.286 Namespace Attribute Notices: Supported 00:07:39.286 Firmware Activation Notices: Not Supported 00:07:39.286 ANA Change Notices: Not Supported 00:07:39.286 PLE Aggregate Log Change Notices: Not Supported 00:07:39.286 LBA Status Info Alert Notices: 
Not Supported 00:07:39.286 EGE Aggregate Log Change Notices: Not Supported 00:07:39.286 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.286 Zone Descriptor Change Notices: Not Supported 00:07:39.286 Discovery Log Change Notices: Not Supported 00:07:39.286 Controller Attributes 00:07:39.286 128-bit Host Identifier: Not Supported 00:07:39.286 Non-Operational Permissive Mode: Not Supported 00:07:39.286 NVM Sets: Not Supported 00:07:39.286 Read Recovery Levels: Not Supported 00:07:39.286 Endurance Groups: Not Supported 00:07:39.286 Predictable Latency Mode: Not Supported 00:07:39.286 Traffic Based Keep ALive: Not Supported 00:07:39.286 Namespace Granularity: Not Supported 00:07:39.286 SQ Associations: Not Supported 00:07:39.286 UUID List: Not Supported 00:07:39.286 Multi-Domain Subsystem: Not Supported 00:07:39.286 Fixed Capacity Management: Not Supported 00:07:39.286 Variable Capacity Management: Not Supported 00:07:39.286 Delete Endurance Group: Not Supported 00:07:39.286 Delete NVM Set: Not Supported 00:07:39.286 Extended LBA Formats Supported: Supported 00:07:39.286 Flexible Data Placement Supported: Not Supported 00:07:39.286 00:07:39.286 Controller Memory Buffer Support 00:07:39.286 ================================ 00:07:39.286 Supported: No 00:07:39.286 00:07:39.286 Persistent Memory Region Support 00:07:39.286 ================================ 00:07:39.286 Supported: No 00:07:39.286 00:07:39.286 Admin Command Set Attributes 00:07:39.286 ============================ 00:07:39.286 Security Send/Receive: Not Supported 00:07:39.286 Format NVM: Supported 00:07:39.286 Firmware Activate/Download: Not Supported 00:07:39.286 Namespace Management: Supported 00:07:39.286 Device Self-Test: Not Supported 00:07:39.287 Directives: Supported 00:07:39.287 NVMe-MI: Not Supported 00:07:39.287 Virtualization Management: Not Supported 00:07:39.287 Doorbell Buffer Config: Supported 00:07:39.287 Get LBA Status Capability: Not Supported 00:07:39.287 Command & Feature Lockdown Capability: Not Supported 00:07:39.287 Abort Command Limit: 4 00:07:39.287 Async Event Request Limit: 4 00:07:39.287 Number of Firmware Slots: N/A 00:07:39.287 Firmware Slot 1 Read-Only: N/A 00:07:39.287 Firmware Activation Without Reset: N/A 00:07:39.287 Multiple Update Detection Support: N/A 00:07:39.287 Firmware Update Granularity: No Information Provided 00:07:39.287 Per-Namespace SMART Log: Yes 00:07:39.287 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.287 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:39.287 Command Effects Log Page: Supported 00:07:39.287 Get Log Page Extended Data: Supported 00:07:39.287 Telemetry Log Pages: Not Supported 00:07:39.287 Persistent Event Log Pages: Not Supported 00:07:39.287 Supported Log Pages Log Page: May Support 00:07:39.287 Commands Supported & Effects Log Page: Not Supported 00:07:39.287 Feature Identifiers & Effects Log Page:May Support 00:07:39.287 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.287 Data Area 4 for Telemetry Log: Not Supported 00:07:39.287 Error Log Page Entries Supported: 1 00:07:39.287 Keep Alive: Not Supported 00:07:39.287 00:07:39.287 NVM Command Set Attributes 00:07:39.287 ========================== 00:07:39.287 Submission Queue Entry Size 00:07:39.287 Max: 64 00:07:39.287 Min: 64 00:07:39.287 Completion Queue Entry Size 00:07:39.287 Max: 16 00:07:39.287 Min: 16 00:07:39.287 Number of Namespaces: 256 00:07:39.287 Compare Command: Supported 00:07:39.287 Write Uncorrectable Command: Not Supported 00:07:39.287 Dataset Management Command: 
Supported 00:07:39.287 Write Zeroes Command: Supported 00:07:39.287 Set Features Save Field: Supported 00:07:39.287 Reservations: Not Supported 00:07:39.287 Timestamp: Supported 00:07:39.287 Copy: Supported 00:07:39.287 Volatile Write Cache: Present 00:07:39.287 Atomic Write Unit (Normal): 1 00:07:39.287 Atomic Write Unit (PFail): 1 00:07:39.287 Atomic Compare & Write Unit: 1 00:07:39.287 Fused Compare & Write: Not Supported 00:07:39.287 Scatter-Gather List 00:07:39.287 SGL Command Set: Supported 00:07:39.287 SGL Keyed: Not Supported 00:07:39.287 SGL Bit Bucket Descriptor: Not Supported 00:07:39.287 SGL Metadata Pointer: Not Supported 00:07:39.287 Oversized SGL: Not Supported 00:07:39.287 SGL Metadata Address: Not Supported 00:07:39.287 SGL Offset: Not Supported 00:07:39.287 Transport SGL Data Block: Not Supported 00:07:39.287 Replay Protected Memory Block: Not Supported 00:07:39.287 00:07:39.287 Firmware Slot Information 00:07:39.287 ========================= 00:07:39.287 Active slot: 1 00:07:39.287 Slot 1 Firmware Revision: 1.0 00:07:39.287 00:07:39.287 00:07:39.287 Commands Supported and Effects 00:07:39.287 ============================== 00:07:39.287 Admin Commands 00:07:39.287 -------------- 00:07:39.287 Delete I/O Submission Queue (00h): Supported 00:07:39.287 Create I/O Submission Queue (01h): Supported 00:07:39.287 Get Log Page (02h): Supported 00:07:39.287 Delete I/O Completion Queue (04h): Supported 00:07:39.287 Create I/O Completion Queue (05h): Supported 00:07:39.287 Identify (06h): Supported 00:07:39.287 Abort (08h): Supported 00:07:39.287 Set Features (09h): Supported 00:07:39.287 Get Features (0Ah): Supported 00:07:39.287 Asynchronous Event Request (0Ch): Supported 00:07:39.287 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.287 Directive Send (19h): Supported 00:07:39.287 Directive Receive (1Ah): Supported 00:07:39.287 Virtualization Management (1Ch): Supported 00:07:39.287 Doorbell Buffer Config (7Ch): Supported 00:07:39.287 Format NVM (80h): Supported LBA-Change 00:07:39.287 I/O Commands 00:07:39.287 ------------ 00:07:39.287 Flush (00h): Supported LBA-Change 00:07:39.287 Write (01h): Supported LBA-Change 00:07:39.287 Read (02h): Supported 00:07:39.287 Compare (05h): Supported 00:07:39.287 Write Zeroes (08h): Supported LBA-Change 00:07:39.287 Dataset Management (09h): Supported LBA-Change 00:07:39.287 Unknown (0Ch): Supported 00:07:39.287 Unknown (12h): Supported 00:07:39.287 Copy (19h): Supported LBA-Change 00:07:39.287 Unknown (1Dh): Supported LBA-Change 00:07:39.287 00:07:39.287 Error Log 00:07:39.287 ========= 00:07:39.287 00:07:39.287 Arbitration 00:07:39.287 =========== 00:07:39.287 Arbitration Burst: no limit 00:07:39.287 00:07:39.287 Power Management 00:07:39.287 ================ 00:07:39.287 Number of Power States: 1 00:07:39.287 Current Power State: Power State #0 00:07:39.287 Power State #0: 00:07:39.287 Max Power: 25.00 W 00:07:39.287 Non-Operational State: Operational 00:07:39.287 Entry Latency: 16 microseconds 00:07:39.287 Exit Latency: 4 microseconds 00:07:39.287 Relative Read Throughput: 0 00:07:39.287 Relative Read Latency: 0 00:07:39.287 Relative Write Throughput: 0 00:07:39.287 Relative Write Latency: 0 00:07:39.287 Idle Power: Not Reported 00:07:39.287 Active Power: Not Reported 00:07:39.287 Non-Operational Permissive Mode: Not Supported 00:07:39.287 00:07:39.287 Health Information 00:07:39.287 ================== 00:07:39.287 Critical Warnings: 00:07:39.287 Available Spare Space: OK 00:07:39.287 Temperature: OK 00:07:39.287 Device 
Reliability: OK 00:07:39.287 Read Only: No 00:07:39.287 Volatile Memory Backup: OK 00:07:39.287 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.287 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.287 Available Spare: 0% 00:07:39.287 Available Spare Threshold: 0% 00:07:39.287 Life Percentage Used: 0% 00:07:39.287 Data Units Read: 2154 00:07:39.287 Data Units Written: 1941 00:07:39.287 Host Read Commands: 115646 00:07:39.287 Host Write Commands: 113915 00:07:39.287 Controller Busy Time: 0 minutes 00:07:39.287 Power Cycles: 0 00:07:39.287 Power On Hours: 0 hours 00:07:39.287 Unsafe Shutdowns: 0 00:07:39.287 Unrecoverable Media Errors: 0 00:07:39.287 Lifetime Error Log Entries: 0 00:07:39.287 Warning Temperature Time: 0 minutes 00:07:39.287 Critical Temperature Time: 0 minutes 00:07:39.287 00:07:39.287 Number of Queues 00:07:39.287 ================ 00:07:39.287 Number of I/O Submission Queues: 64 00:07:39.287 Number of I/O Completion Queues: 64 00:07:39.287 00:07:39.287 ZNS Specific Controller Data 00:07:39.287 ============================ 00:07:39.287 Zone Append Size Limit: 0 00:07:39.287 00:07:39.287 00:07:39.287 Active Namespaces 00:07:39.287 ================= 00:07:39.287 Namespace ID:1 00:07:39.287 Error Recovery Timeout: Unlimited 00:07:39.287 Command Set Identifier: NVM (00h) 00:07:39.287 Deallocate: Supported 00:07:39.287 Deallocated/Unwritten Error: Supported 00:07:39.287 Deallocated Read Value: All 0x00 00:07:39.287 Deallocate in Write Zeroes: Not Supported 00:07:39.287 Deallocated Guard Field: 0xFFFF 00:07:39.287 Flush: Supported 00:07:39.287 Reservation: Not Supported 00:07:39.287 Namespace Sharing Capabilities: Private 00:07:39.287 Size (in LBAs): 1048576 (4GiB) 00:07:39.287 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.287 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.287 Thin Provisioning: Not Supported 00:07:39.287 Per-NS Atomic Units: No 00:07:39.287 Maximum Single Source Range Length: 128 00:07:39.287 Maximum Copy Length: 128 00:07:39.287 Maximum Source Range Count: 128 00:07:39.287 NGUID/EUI64 Never Reused: No 00:07:39.287 Namespace Write Protected: No 00:07:39.287 Number of LBA Formats: 8 00:07:39.287 Current LBA Format: LBA Format #04 00:07:39.287 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.287 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.287 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.287 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.287 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.287 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.287 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.287 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.287 00:07:39.287 NVM Specific Namespace Data 00:07:39.287 =========================== 00:07:39.287 Logical Block Storage Tag Mask: 0 00:07:39.287 Protection Information Capabilities: 00:07:39.287 16b Guard Protection Information Storage Tag Support: No 00:07:39.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.287 Storage Tag Check Read Support: No 00:07:39.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.287 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Namespace ID:2 00:07:39.288 Error Recovery Timeout: Unlimited 00:07:39.288 Command Set Identifier: NVM (00h) 00:07:39.288 Deallocate: Supported 00:07:39.288 Deallocated/Unwritten Error: Supported 00:07:39.288 Deallocated Read Value: All 0x00 00:07:39.288 Deallocate in Write Zeroes: Not Supported 00:07:39.288 Deallocated Guard Field: 0xFFFF 00:07:39.288 Flush: Supported 00:07:39.288 Reservation: Not Supported 00:07:39.288 Namespace Sharing Capabilities: Private 00:07:39.288 Size (in LBAs): 1048576 (4GiB) 00:07:39.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.288 Thin Provisioning: Not Supported 00:07:39.288 Per-NS Atomic Units: No 00:07:39.288 Maximum Single Source Range Length: 128 00:07:39.288 Maximum Copy Length: 128 00:07:39.288 Maximum Source Range Count: 128 00:07:39.288 NGUID/EUI64 Never Reused: No 00:07:39.288 Namespace Write Protected: No 00:07:39.288 Number of LBA Formats: 8 00:07:39.288 Current LBA Format: LBA Format #04 00:07:39.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.288 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.288 00:07:39.288 NVM Specific Namespace Data 00:07:39.288 =========================== 00:07:39.288 Logical Block Storage Tag Mask: 0 00:07:39.288 Protection Information Capabilities: 00:07:39.288 16b Guard Protection Information Storage Tag Support: No 00:07:39.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.288 Storage Tag Check Read Support: No 00:07:39.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Namespace ID:3 00:07:39.288 Error Recovery Timeout: Unlimited 00:07:39.288 Command Set Identifier: NVM (00h) 00:07:39.288 Deallocate: Supported 00:07:39.288 Deallocated/Unwritten Error: Supported 00:07:39.288 Deallocated Read Value: All 0x00 00:07:39.288 Deallocate in Write Zeroes: Not Supported 00:07:39.288 Deallocated Guard Field: 0xFFFF 00:07:39.288 Flush: Supported 00:07:39.288 Reservation: Not Supported 00:07:39.288 
Namespace Sharing Capabilities: Private 00:07:39.288 Size (in LBAs): 1048576 (4GiB) 00:07:39.288 Capacity (in LBAs): 1048576 (4GiB) 00:07:39.288 Utilization (in LBAs): 1048576 (4GiB) 00:07:39.288 Thin Provisioning: Not Supported 00:07:39.288 Per-NS Atomic Units: No 00:07:39.288 Maximum Single Source Range Length: 128 00:07:39.288 Maximum Copy Length: 128 00:07:39.288 Maximum Source Range Count: 128 00:07:39.288 NGUID/EUI64 Never Reused: No 00:07:39.288 Namespace Write Protected: No 00:07:39.288 Number of LBA Formats: 8 00:07:39.288 Current LBA Format: LBA Format #04 00:07:39.288 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.288 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.288 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.288 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.288 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.288 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.288 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:39.288 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.288 00:07:39.288 NVM Specific Namespace Data 00:07:39.288 =========================== 00:07:39.288 Logical Block Storage Tag Mask: 0 00:07:39.288 Protection Information Capabilities: 00:07:39.288 16b Guard Protection Information Storage Tag Support: No 00:07:39.288 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.288 Storage Tag Check Read Support: No 00:07:39.288 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.288 01:33:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:39.288 01:33:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:39.547 ===================================================== 00:07:39.547 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:39.547 ===================================================== 00:07:39.547 Controller Capabilities/Features 00:07:39.547 ================================ 00:07:39.547 Vendor ID: 1b36 00:07:39.547 Subsystem Vendor ID: 1af4 00:07:39.547 Serial Number: 12343 00:07:39.547 Model Number: QEMU NVMe Ctrl 00:07:39.547 Firmware Version: 8.0.0 00:07:39.547 Recommended Arb Burst: 6 00:07:39.547 IEEE OUI Identifier: 00 54 52 00:07:39.547 Multi-path I/O 00:07:39.547 May have multiple subsystem ports: No 00:07:39.547 May have multiple controllers: Yes 00:07:39.547 Associated with SR-IOV VF: No 00:07:39.547 Max Data Transfer Size: 524288 00:07:39.547 Max Number of Namespaces: 256 00:07:39.547 Max Number of I/O Queues: 64 00:07:39.547 NVMe Specification Version (VS): 1.4 00:07:39.547 NVMe Specification Version (Identify): 1.4 00:07:39.547 Maximum Queue Entries: 2048 
00:07:39.547 Contiguous Queues Required: Yes 00:07:39.547 Arbitration Mechanisms Supported 00:07:39.547 Weighted Round Robin: Not Supported 00:07:39.547 Vendor Specific: Not Supported 00:07:39.547 Reset Timeout: 7500 ms 00:07:39.547 Doorbell Stride: 4 bytes 00:07:39.547 NVM Subsystem Reset: Not Supported 00:07:39.547 Command Sets Supported 00:07:39.547 NVM Command Set: Supported 00:07:39.547 Boot Partition: Not Supported 00:07:39.547 Memory Page Size Minimum: 4096 bytes 00:07:39.547 Memory Page Size Maximum: 65536 bytes 00:07:39.547 Persistent Memory Region: Not Supported 00:07:39.547 Optional Asynchronous Events Supported 00:07:39.547 Namespace Attribute Notices: Supported 00:07:39.547 Firmware Activation Notices: Not Supported 00:07:39.547 ANA Change Notices: Not Supported 00:07:39.547 PLE Aggregate Log Change Notices: Not Supported 00:07:39.547 LBA Status Info Alert Notices: Not Supported 00:07:39.547 EGE Aggregate Log Change Notices: Not Supported 00:07:39.547 Normal NVM Subsystem Shutdown event: Not Supported 00:07:39.547 Zone Descriptor Change Notices: Not Supported 00:07:39.547 Discovery Log Change Notices: Not Supported 00:07:39.547 Controller Attributes 00:07:39.547 128-bit Host Identifier: Not Supported 00:07:39.548 Non-Operational Permissive Mode: Not Supported 00:07:39.548 NVM Sets: Not Supported 00:07:39.548 Read Recovery Levels: Not Supported 00:07:39.548 Endurance Groups: Supported 00:07:39.548 Predictable Latency Mode: Not Supported 00:07:39.548 Traffic Based Keep Alive: Not Supported 00:07:39.548 Namespace Granularity: Not Supported 00:07:39.548 SQ Associations: Not Supported 00:07:39.548 UUID List: Not Supported 00:07:39.548 Multi-Domain Subsystem: Not Supported 00:07:39.548 Fixed Capacity Management: Not Supported 00:07:39.548 Variable Capacity Management: Not Supported 00:07:39.548 Delete Endurance Group: Not Supported 00:07:39.548 Delete NVM Set: Not Supported 00:07:39.548 Extended LBA Formats Supported: Supported 00:07:39.548 Flexible Data Placement Supported: Supported 00:07:39.548 00:07:39.548 Controller Memory Buffer Support 00:07:39.548 ================================ 00:07:39.548 Supported: No 00:07:39.548 00:07:39.548 Persistent Memory Region Support 00:07:39.548 ================================ 00:07:39.548 Supported: No 00:07:39.548 00:07:39.548 Admin Command Set Attributes 00:07:39.548 ============================ 00:07:39.548 Security Send/Receive: Not Supported 00:07:39.548 Format NVM: Supported 00:07:39.548 Firmware Activate/Download: Not Supported 00:07:39.548 Namespace Management: Supported 00:07:39.548 Device Self-Test: Not Supported 00:07:39.548 Directives: Supported 00:07:39.548 NVMe-MI: Not Supported 00:07:39.548 Virtualization Management: Not Supported 00:07:39.548 Doorbell Buffer Config: Supported 00:07:39.548 Get LBA Status Capability: Not Supported 00:07:39.548 Command & Feature Lockdown Capability: Not Supported 00:07:39.548 Abort Command Limit: 4 00:07:39.548 Async Event Request Limit: 4 00:07:39.548 Number of Firmware Slots: N/A 00:07:39.548 Firmware Slot 1 Read-Only: N/A 00:07:39.548 Firmware Activation Without Reset: N/A 00:07:39.548 Multiple Update Detection Support: N/A 00:07:39.548 Firmware Update Granularity: No Information Provided 00:07:39.548 Per-Namespace SMART Log: Yes 00:07:39.548 Asymmetric Namespace Access Log Page: Not Supported 00:07:39.548 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:39.548 Command Effects Log Page: Supported 00:07:39.548 Get Log Page Extended Data: Supported 00:07:39.548 Telemetry Log Pages: Not 
Supported 00:07:39.548 Persistent Event Log Pages: Not Supported 00:07:39.548 Supported Log Pages Log Page: May Support 00:07:39.548 Commands Supported & Effects Log Page: Not Supported 00:07:39.548 Feature Identifiers & Effects Log Page:May Support 00:07:39.548 NVMe-MI Commands & Effects Log Page: May Support 00:07:39.548 Data Area 4 for Telemetry Log: Not Supported 00:07:39.548 Error Log Page Entries Supported: 1 00:07:39.548 Keep Alive: Not Supported 00:07:39.548 00:07:39.548 NVM Command Set Attributes 00:07:39.548 ========================== 00:07:39.548 Submission Queue Entry Size 00:07:39.548 Max: 64 00:07:39.548 Min: 64 00:07:39.548 Completion Queue Entry Size 00:07:39.548 Max: 16 00:07:39.548 Min: 16 00:07:39.548 Number of Namespaces: 256 00:07:39.548 Compare Command: Supported 00:07:39.548 Write Uncorrectable Command: Not Supported 00:07:39.548 Dataset Management Command: Supported 00:07:39.548 Write Zeroes Command: Supported 00:07:39.548 Set Features Save Field: Supported 00:07:39.548 Reservations: Not Supported 00:07:39.548 Timestamp: Supported 00:07:39.548 Copy: Supported 00:07:39.548 Volatile Write Cache: Present 00:07:39.548 Atomic Write Unit (Normal): 1 00:07:39.548 Atomic Write Unit (PFail): 1 00:07:39.548 Atomic Compare & Write Unit: 1 00:07:39.548 Fused Compare & Write: Not Supported 00:07:39.548 Scatter-Gather List 00:07:39.548 SGL Command Set: Supported 00:07:39.548 SGL Keyed: Not Supported 00:07:39.548 SGL Bit Bucket Descriptor: Not Supported 00:07:39.548 SGL Metadata Pointer: Not Supported 00:07:39.548 Oversized SGL: Not Supported 00:07:39.548 SGL Metadata Address: Not Supported 00:07:39.548 SGL Offset: Not Supported 00:07:39.548 Transport SGL Data Block: Not Supported 00:07:39.548 Replay Protected Memory Block: Not Supported 00:07:39.548 00:07:39.548 Firmware Slot Information 00:07:39.548 ========================= 00:07:39.548 Active slot: 1 00:07:39.548 Slot 1 Firmware Revision: 1.0 00:07:39.548 00:07:39.548 00:07:39.548 Commands Supported and Effects 00:07:39.548 ============================== 00:07:39.548 Admin Commands 00:07:39.548 -------------- 00:07:39.548 Delete I/O Submission Queue (00h): Supported 00:07:39.548 Create I/O Submission Queue (01h): Supported 00:07:39.548 Get Log Page (02h): Supported 00:07:39.548 Delete I/O Completion Queue (04h): Supported 00:07:39.548 Create I/O Completion Queue (05h): Supported 00:07:39.548 Identify (06h): Supported 00:07:39.548 Abort (08h): Supported 00:07:39.548 Set Features (09h): Supported 00:07:39.548 Get Features (0Ah): Supported 00:07:39.548 Asynchronous Event Request (0Ch): Supported 00:07:39.548 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:39.548 Directive Send (19h): Supported 00:07:39.548 Directive Receive (1Ah): Supported 00:07:39.548 Virtualization Management (1Ch): Supported 00:07:39.548 Doorbell Buffer Config (7Ch): Supported 00:07:39.548 Format NVM (80h): Supported LBA-Change 00:07:39.548 I/O Commands 00:07:39.548 ------------ 00:07:39.548 Flush (00h): Supported LBA-Change 00:07:39.548 Write (01h): Supported LBA-Change 00:07:39.548 Read (02h): Supported 00:07:39.548 Compare (05h): Supported 00:07:39.548 Write Zeroes (08h): Supported LBA-Change 00:07:39.548 Dataset Management (09h): Supported LBA-Change 00:07:39.548 Unknown (0Ch): Supported 00:07:39.548 Unknown (12h): Supported 00:07:39.548 Copy (19h): Supported LBA-Change 00:07:39.548 Unknown (1Dh): Supported LBA-Change 00:07:39.548 00:07:39.548 Error Log 00:07:39.548 ========= 00:07:39.548 00:07:39.548 Arbitration 00:07:39.548 =========== 
00:07:39.548 Arbitration Burst: no limit 00:07:39.548 00:07:39.548 Power Management 00:07:39.548 ================ 00:07:39.548 Number of Power States: 1 00:07:39.548 Current Power State: Power State #0 00:07:39.548 Power State #0: 00:07:39.548 Max Power: 25.00 W 00:07:39.548 Non-Operational State: Operational 00:07:39.548 Entry Latency: 16 microseconds 00:07:39.548 Exit Latency: 4 microseconds 00:07:39.548 Relative Read Throughput: 0 00:07:39.548 Relative Read Latency: 0 00:07:39.548 Relative Write Throughput: 0 00:07:39.548 Relative Write Latency: 0 00:07:39.548 Idle Power: Not Reported 00:07:39.548 Active Power: Not Reported 00:07:39.548 Non-Operational Permissive Mode: Not Supported 00:07:39.548 00:07:39.548 Health Information 00:07:39.548 ================== 00:07:39.548 Critical Warnings: 00:07:39.548 Available Spare Space: OK 00:07:39.548 Temperature: OK 00:07:39.548 Device Reliability: OK 00:07:39.548 Read Only: No 00:07:39.548 Volatile Memory Backup: OK 00:07:39.548 Current Temperature: 323 Kelvin (50 Celsius) 00:07:39.548 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:39.548 Available Spare: 0% 00:07:39.548 Available Spare Threshold: 0% 00:07:39.548 Life Percentage Used: 0% 00:07:39.548 Data Units Read: 787 00:07:39.548 Data Units Written: 716 00:07:39.548 Host Read Commands: 39284 00:07:39.548 Host Write Commands: 38707 00:07:39.548 Controller Busy Time: 0 minutes 00:07:39.548 Power Cycles: 0 00:07:39.548 Power On Hours: 0 hours 00:07:39.548 Unsafe Shutdowns: 0 00:07:39.548 Unrecoverable Media Errors: 0 00:07:39.548 Lifetime Error Log Entries: 0 00:07:39.548 Warning Temperature Time: 0 minutes 00:07:39.548 Critical Temperature Time: 0 minutes 00:07:39.548 00:07:39.548 Number of Queues 00:07:39.548 ================ 00:07:39.548 Number of I/O Submission Queues: 64 00:07:39.548 Number of I/O Completion Queues: 64 00:07:39.548 00:07:39.548 ZNS Specific Controller Data 00:07:39.548 ============================ 00:07:39.548 Zone Append Size Limit: 0 00:07:39.548 00:07:39.548 00:07:39.548 Active Namespaces 00:07:39.548 ================= 00:07:39.548 Namespace ID:1 00:07:39.548 Error Recovery Timeout: Unlimited 00:07:39.548 Command Set Identifier: NVM (00h) 00:07:39.548 Deallocate: Supported 00:07:39.548 Deallocated/Unwritten Error: Supported 00:07:39.548 Deallocated Read Value: All 0x00 00:07:39.548 Deallocate in Write Zeroes: Not Supported 00:07:39.548 Deallocated Guard Field: 0xFFFF 00:07:39.548 Flush: Supported 00:07:39.548 Reservation: Not Supported 00:07:39.548 Namespace Sharing Capabilities: Multiple Controllers 00:07:39.548 Size (in LBAs): 262144 (1GiB) 00:07:39.548 Capacity (in LBAs): 262144 (1GiB) 00:07:39.548 Utilization (in LBAs): 262144 (1GiB) 00:07:39.548 Thin Provisioning: Not Supported 00:07:39.549 Per-NS Atomic Units: No 00:07:39.549 Maximum Single Source Range Length: 128 00:07:39.549 Maximum Copy Length: 128 00:07:39.549 Maximum Source Range Count: 128 00:07:39.549 NGUID/EUI64 Never Reused: No 00:07:39.549 Namespace Write Protected: No 00:07:39.549 Endurance group ID: 1 00:07:39.549 Number of LBA Formats: 8 00:07:39.549 Current LBA Format: LBA Format #04 00:07:39.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:39.549 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:39.549 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:39.549 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:39.549 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:39.549 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:39.549 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:39.549 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:39.549 00:07:39.549 Get Feature FDP: 00:07:39.549 ================ 00:07:39.549 Enabled: Yes 00:07:39.549 FDP configuration index: 0 00:07:39.549 00:07:39.549 FDP configurations log page 00:07:39.549 =========================== 00:07:39.549 Number of FDP configurations: 1 00:07:39.549 Version: 0 00:07:39.549 Size: 112 00:07:39.549 FDP Configuration Descriptor: 0 00:07:39.549 Descriptor Size: 96 00:07:39.549 Reclaim Group Identifier format: 2 00:07:39.549 FDP Volatile Write Cache: Not Present 00:07:39.549 FDP Configuration: Valid 00:07:39.549 Vendor Specific Size: 0 00:07:39.549 Number of Reclaim Groups: 2 00:07:39.549 Number of Reclaim Unit Handles: 8 00:07:39.549 Max Placement Identifiers: 128 00:07:39.549 Number of Namespaces Supported: 256 00:07:39.549 Reclaim unit Nominal Size: 6000000 bytes 00:07:39.549 Estimated Reclaim Unit Time Limit: Not Reported 00:07:39.549 RUH Desc #000: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #001: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #002: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #003: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #004: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #005: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #006: RUH Type: Initially Isolated 00:07:39.549 RUH Desc #007: RUH Type: Initially Isolated 00:07:39.549 00:07:39.549 FDP reclaim unit handle usage log page 00:07:39.549 ====================================== 00:07:39.549 Number of Reclaim Unit Handles: 8 00:07:39.549 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:39.549 RUH Usage Desc #001: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #002: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #003: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #004: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #005: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #006: RUH Attributes: Unused 00:07:39.549 RUH Usage Desc #007: RUH Attributes: Unused 00:07:39.549 00:07:39.549 FDP statistics log page 00:07:39.549 ======================= 00:07:39.549 Host bytes with metadata written: 466395136 00:07:39.549 Media bytes with metadata written: 466448384 00:07:39.549 Media bytes erased: 0 00:07:39.549 00:07:39.549 FDP events log page 00:07:39.549 =================== 00:07:39.549 Number of FDP events: 0 00:07:39.549 00:07:39.549 NVM Specific Namespace Data 00:07:39.549 =========================== 00:07:39.549 Logical Block Storage Tag Mask: 0 00:07:39.549 Protection Information Capabilities: 00:07:39.549 16b Guard Protection Information Storage Tag Support: No 00:07:39.549 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:39.549 Storage Tag Check Read Support: No 00:07:39.549 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:39.549 00:07:39.549 real 0m1.213s 00:07:39.549 user 0m0.459s 00:07:39.549 sys 0m0.515s 00:07:39.549 ************************************ 00:07:39.549 END TEST nvme_identify 00:07:39.549 01:33:23 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.549 01:33:23 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:39.549 ************************************ 00:07:39.549 01:33:23 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:39.549 01:33:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.549 01:33:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.549 01:33:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.549 ************************************ 00:07:39.549 START TEST nvme_perf 00:07:39.549 ************************************ 00:07:39.549 01:33:23 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:39.549 01:33:23 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:40.927 Initializing NVMe Controllers 00:07:40.927 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:40.927 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:40.927 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:40.927 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:40.927 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:40.927 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:40.927 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:40.927 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:40.927 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:40.927 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:40.927 Initialization complete. Launching workers. 
00:07:40.927 ======================================================== 00:07:40.927 Latency(us) 00:07:40.927 Device Information : IOPS MiB/s Average min max 00:07:40.927 PCIE (0000:00:11.0) NSID 1 from core 0: 10874.60 127.44 11788.50 8677.81 39070.35 00:07:40.927 PCIE (0000:00:13.0) NSID 1 from core 0: 10874.60 127.44 11771.98 8826.60 37833.91 00:07:40.927 PCIE (0000:00:10.0) NSID 1 from core 0: 10874.60 127.44 11753.03 8949.38 36862.89 00:07:40.927 PCIE (0000:00:12.0) NSID 1 from core 0: 10874.60 127.44 11735.83 9235.72 35581.72 00:07:40.927 PCIE (0000:00:12.0) NSID 2 from core 0: 10874.60 127.44 11717.23 8329.95 34994.02 00:07:40.927 PCIE (0000:00:12.0) NSID 3 from core 0: 10938.56 128.19 11630.80 8534.16 26789.62 00:07:40.927 ======================================================== 00:07:40.927 Total : 65311.54 765.37 11732.79 8329.95 39070.35 00:07:40.927 00:07:40.927 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.927 ================================================================================= 00:07:40.927 1.00000% : 9376.689us 00:07:40.927 10.00000% : 9981.637us 00:07:40.927 25.00000% : 10334.523us 00:07:40.927 50.00000% : 10889.058us 00:07:40.927 75.00000% : 12048.542us 00:07:40.927 90.00000% : 14922.043us 00:07:40.927 95.00000% : 16131.938us 00:07:40.927 98.00000% : 17442.658us 00:07:40.927 99.00000% : 28029.243us 00:07:40.927 99.50000% : 37910.055us 00:07:40.928 99.90000% : 38918.302us 00:07:40.928 99.99000% : 39119.951us 00:07:40.928 99.99900% : 39119.951us 00:07:40.928 99.99990% : 39119.951us 00:07:40.928 99.99999% : 39119.951us 00:07:40.928 00:07:40.928 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.928 ================================================================================= 00:07:40.928 1.00000% : 9376.689us 00:07:40.928 10.00000% : 9981.637us 00:07:40.928 25.00000% : 10334.523us 00:07:40.928 50.00000% : 10889.058us 00:07:40.928 75.00000% : 12048.542us 00:07:40.928 90.00000% : 14922.043us 00:07:40.928 95.00000% : 16131.938us 00:07:40.928 98.00000% : 17241.009us 00:07:40.928 99.00000% : 26819.348us 00:07:40.928 99.50000% : 36700.160us 00:07:40.928 99.90000% : 37708.406us 00:07:40.928 99.99000% : 37910.055us 00:07:40.928 99.99900% : 37910.055us 00:07:40.928 99.99990% : 37910.055us 00:07:40.928 99.99999% : 37910.055us 00:07:40.928 00:07:40.928 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.928 ================================================================================= 00:07:40.928 1.00000% : 9427.102us 00:07:40.928 10.00000% : 9931.225us 00:07:40.928 25.00000% : 10334.523us 00:07:40.928 50.00000% : 10838.646us 00:07:40.928 75.00000% : 12098.954us 00:07:40.928 90.00000% : 15022.868us 00:07:40.928 95.00000% : 16131.938us 00:07:40.928 98.00000% : 17442.658us 00:07:40.928 99.00000% : 25811.102us 00:07:40.928 99.50000% : 35490.265us 00:07:40.928 99.90000% : 36700.160us 00:07:40.928 99.99000% : 36901.809us 00:07:40.928 99.99900% : 36901.809us 00:07:40.928 99.99990% : 36901.809us 00:07:40.928 99.99999% : 36901.809us 00:07:40.928 00:07:40.928 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.928 ================================================================================= 00:07:40.928 1.00000% : 9578.338us 00:07:40.928 10.00000% : 10032.049us 00:07:40.928 25.00000% : 10334.523us 00:07:40.928 50.00000% : 10838.646us 00:07:40.928 75.00000% : 12098.954us 00:07:40.928 90.00000% : 15022.868us 00:07:40.928 95.00000% : 15930.289us 00:07:40.928 98.00000% : 17241.009us 
00:07:40.928 99.00000% : 25105.329us 00:07:40.928 99.50000% : 34280.369us 00:07:40.928 99.90000% : 35490.265us 00:07:40.928 99.99000% : 35691.914us 00:07:40.928 99.99900% : 35691.914us 00:07:40.928 99.99990% : 35691.914us 00:07:40.928 99.99999% : 35691.914us 00:07:40.928 00:07:40.928 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.928 ================================================================================= 00:07:40.928 1.00000% : 9275.865us 00:07:40.928 10.00000% : 9931.225us 00:07:40.928 25.00000% : 10284.111us 00:07:40.928 50.00000% : 10838.646us 00:07:40.928 75.00000% : 12149.366us 00:07:40.928 90.00000% : 14922.043us 00:07:40.928 95.00000% : 16232.763us 00:07:40.928 98.00000% : 17241.009us 00:07:40.928 99.00000% : 25811.102us 00:07:40.928 99.50000% : 33675.422us 00:07:40.928 99.90000% : 34885.317us 00:07:40.928 99.99000% : 35086.966us 00:07:40.928 99.99900% : 35086.966us 00:07:40.928 99.99990% : 35086.966us 00:07:40.928 99.99999% : 35086.966us 00:07:40.928 00:07:40.928 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.928 ================================================================================= 00:07:40.928 1.00000% : 9326.277us 00:07:40.928 10.00000% : 9931.225us 00:07:40.928 25.00000% : 10284.111us 00:07:40.928 50.00000% : 10838.646us 00:07:40.928 75.00000% : 12149.366us 00:07:40.928 90.00000% : 14922.043us 00:07:40.928 95.00000% : 16131.938us 00:07:40.928 98.00000% : 17341.834us 00:07:40.928 99.00000% : 17845.957us 00:07:40.928 99.50000% : 25508.628us 00:07:40.928 99.90000% : 26617.698us 00:07:40.928 99.99000% : 26819.348us 00:07:40.928 99.99900% : 26819.348us 00:07:40.928 99.99990% : 26819.348us 00:07:40.928 99.99999% : 26819.348us 00:07:40.928 00:07:40.928 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:40.928 ============================================================================== 00:07:40.928 Range in us Cumulative IO count 00:07:40.928 8670.917 - 8721.329: 0.0827% ( 9) 00:07:40.928 8721.329 - 8771.742: 0.1287% ( 5) 00:07:40.928 8771.742 - 8822.154: 0.1930% ( 7) 00:07:40.928 8822.154 - 8872.566: 0.2114% ( 2) 00:07:40.928 8872.566 - 8922.978: 0.2298% ( 2) 00:07:40.928 8922.978 - 8973.391: 0.2482% ( 2) 00:07:40.928 8973.391 - 9023.803: 0.2757% ( 3) 00:07:40.928 9023.803 - 9074.215: 0.2941% ( 2) 00:07:40.928 9074.215 - 9124.628: 0.3309% ( 4) 00:07:40.928 9124.628 - 9175.040: 0.4596% ( 14) 00:07:40.928 9175.040 - 9225.452: 0.5699% ( 12) 00:07:40.928 9225.452 - 9275.865: 0.7261% ( 17) 00:07:40.928 9275.865 - 9326.277: 0.9099% ( 20) 00:07:40.928 9326.277 - 9376.689: 1.1213% ( 23) 00:07:40.928 9376.689 - 9427.102: 1.3511% ( 25) 00:07:40.928 9427.102 - 9477.514: 1.5993% ( 27) 00:07:40.928 9477.514 - 9527.926: 1.9485% ( 38) 00:07:40.928 9527.926 - 9578.338: 2.3713% ( 46) 00:07:40.928 9578.338 - 9628.751: 2.8217% ( 49) 00:07:40.928 9628.751 - 9679.163: 3.5018% ( 74) 00:07:40.928 9679.163 - 9729.575: 4.4945% ( 108) 00:07:40.928 9729.575 - 9779.988: 5.4412% ( 103) 00:07:40.928 9779.988 - 9830.400: 6.5165% ( 117) 00:07:40.928 9830.400 - 9880.812: 7.8676% ( 147) 00:07:40.928 9880.812 - 9931.225: 9.2555% ( 151) 00:07:40.928 9931.225 - 9981.637: 10.9743% ( 187) 00:07:40.928 9981.637 - 10032.049: 12.7206% ( 190) 00:07:40.928 10032.049 - 10082.462: 14.8162% ( 228) 00:07:40.928 10082.462 - 10132.874: 17.0037% ( 238) 00:07:40.928 10132.874 - 10183.286: 19.1728% ( 236) 00:07:40.928 10183.286 - 10233.698: 21.4522% ( 248) 00:07:40.928 10233.698 - 10284.111: 23.8051% ( 256) 00:07:40.928 10284.111 - 10334.523: 
26.4614% ( 289) 00:07:40.928 10334.523 - 10384.935: 29.1268% ( 290) 00:07:40.928 10384.935 - 10435.348: 31.8382% ( 295) 00:07:40.928 10435.348 - 10485.760: 34.6324% ( 304) 00:07:40.928 10485.760 - 10536.172: 37.0129% ( 259) 00:07:40.928 10536.172 - 10586.585: 39.3199% ( 251) 00:07:40.929 10586.585 - 10636.997: 41.6544% ( 254) 00:07:40.929 10636.997 - 10687.409: 43.8511% ( 239) 00:07:40.929 10687.409 - 10737.822: 45.9099% ( 224) 00:07:40.929 10737.822 - 10788.234: 47.8585% ( 212) 00:07:40.929 10788.234 - 10838.646: 49.8162% ( 213) 00:07:40.929 10838.646 - 10889.058: 51.5901% ( 193) 00:07:40.929 10889.058 - 10939.471: 53.4743% ( 205) 00:07:40.929 10939.471 - 10989.883: 55.2482% ( 193) 00:07:40.929 10989.883 - 11040.295: 56.8107% ( 170) 00:07:40.929 11040.295 - 11090.708: 58.2721% ( 159) 00:07:40.929 11090.708 - 11141.120: 59.5772% ( 142) 00:07:40.929 11141.120 - 11191.532: 61.0938% ( 165) 00:07:40.929 11191.532 - 11241.945: 62.4632% ( 149) 00:07:40.929 11241.945 - 11292.357: 63.7132% ( 136) 00:07:40.929 11292.357 - 11342.769: 64.9081% ( 130) 00:07:40.929 11342.769 - 11393.182: 65.9467% ( 113) 00:07:40.929 11393.182 - 11443.594: 66.9485% ( 109) 00:07:40.929 11443.594 - 11494.006: 67.9320% ( 107) 00:07:40.929 11494.006 - 11544.418: 68.8327% ( 98) 00:07:40.929 11544.418 - 11594.831: 69.6783% ( 92) 00:07:40.929 11594.831 - 11645.243: 70.5147% ( 91) 00:07:40.929 11645.243 - 11695.655: 71.3143% ( 87) 00:07:40.929 11695.655 - 11746.068: 72.0037% ( 75) 00:07:40.929 11746.068 - 11796.480: 72.6103% ( 66) 00:07:40.929 11796.480 - 11846.892: 73.2629% ( 71) 00:07:40.929 11846.892 - 11897.305: 73.8879% ( 68) 00:07:40.929 11897.305 - 11947.717: 74.3934% ( 55) 00:07:40.929 11947.717 - 11998.129: 74.8897% ( 54) 00:07:40.929 11998.129 - 12048.542: 75.3585% ( 51) 00:07:40.929 12048.542 - 12098.954: 75.8088% ( 49) 00:07:40.929 12098.954 - 12149.366: 76.1213% ( 34) 00:07:40.929 12149.366 - 12199.778: 76.3511% ( 25) 00:07:40.929 12199.778 - 12250.191: 76.6176% ( 29) 00:07:40.929 12250.191 - 12300.603: 76.9853% ( 40) 00:07:40.929 12300.603 - 12351.015: 77.3438% ( 39) 00:07:40.929 12351.015 - 12401.428: 77.5735% ( 25) 00:07:40.929 12401.428 - 12451.840: 77.8401% ( 29) 00:07:40.929 12451.840 - 12502.252: 78.0790% ( 26) 00:07:40.929 12502.252 - 12552.665: 78.2812% ( 22) 00:07:40.929 12552.665 - 12603.077: 78.4467% ( 18) 00:07:40.929 12603.077 - 12653.489: 78.5754% ( 14) 00:07:40.929 12653.489 - 12703.902: 78.7224% ( 16) 00:07:40.929 12703.902 - 12754.314: 78.8879% ( 18) 00:07:40.929 12754.314 - 12804.726: 79.0901% ( 22) 00:07:40.929 12804.726 - 12855.138: 79.2923% ( 22) 00:07:40.929 12855.138 - 12905.551: 79.4945% ( 22) 00:07:40.929 12905.551 - 13006.375: 79.9908% ( 54) 00:07:40.929 13006.375 - 13107.200: 80.5239% ( 58) 00:07:40.929 13107.200 - 13208.025: 80.9191% ( 43) 00:07:40.929 13208.025 - 13308.849: 81.3419% ( 46) 00:07:40.929 13308.849 - 13409.674: 81.7463% ( 44) 00:07:40.929 13409.674 - 13510.498: 82.2610% ( 56) 00:07:40.929 13510.498 - 13611.323: 82.6930% ( 47) 00:07:40.929 13611.323 - 13712.148: 83.2537% ( 61) 00:07:40.929 13712.148 - 13812.972: 83.8511% ( 65) 00:07:40.929 13812.972 - 13913.797: 84.5221% ( 73) 00:07:40.929 13913.797 - 14014.622: 85.2298% ( 77) 00:07:40.929 14014.622 - 14115.446: 85.9191% ( 75) 00:07:40.929 14115.446 - 14216.271: 86.4982% ( 63) 00:07:40.929 14216.271 - 14317.095: 87.1691% ( 73) 00:07:40.929 14317.095 - 14417.920: 87.8768% ( 77) 00:07:40.929 14417.920 - 14518.745: 88.4743% ( 65) 00:07:40.929 14518.745 - 14619.569: 89.0074% ( 58) 00:07:40.929 14619.569 - 14720.394: 89.4393% ( 
47) 00:07:40.929 14720.394 - 14821.218: 89.8529% ( 45) 00:07:40.929 14821.218 - 14922.043: 90.2206% ( 40) 00:07:40.929 14922.043 - 15022.868: 90.5515% ( 36) 00:07:40.929 15022.868 - 15123.692: 90.9835% ( 47) 00:07:40.929 15123.692 - 15224.517: 91.4522% ( 51) 00:07:40.929 15224.517 - 15325.342: 91.9301% ( 52) 00:07:40.929 15325.342 - 15426.166: 92.4632% ( 58) 00:07:40.929 15426.166 - 15526.991: 92.9044% ( 48) 00:07:40.929 15526.991 - 15627.815: 93.4099% ( 55) 00:07:40.929 15627.815 - 15728.640: 93.8327% ( 46) 00:07:40.929 15728.640 - 15829.465: 94.1820% ( 38) 00:07:40.929 15829.465 - 15930.289: 94.5221% ( 37) 00:07:40.929 15930.289 - 16031.114: 94.8621% ( 37) 00:07:40.929 16031.114 - 16131.938: 95.0643% ( 22) 00:07:40.929 16131.938 - 16232.763: 95.1746% ( 12) 00:07:40.929 16232.763 - 16333.588: 95.2941% ( 13) 00:07:40.929 16333.588 - 16434.412: 95.5055% ( 23) 00:07:40.929 16434.412 - 16535.237: 95.7629% ( 28) 00:07:40.929 16535.237 - 16636.062: 96.0478% ( 31) 00:07:40.929 16636.062 - 16736.886: 96.2776% ( 25) 00:07:40.929 16736.886 - 16837.711: 96.5074% ( 25) 00:07:40.929 16837.711 - 16938.535: 96.6636% ( 17) 00:07:40.929 16938.535 - 17039.360: 96.8474% ( 20) 00:07:40.929 17039.360 - 17140.185: 97.1324% ( 31) 00:07:40.929 17140.185 - 17241.009: 97.4265% ( 32) 00:07:40.929 17241.009 - 17341.834: 97.7665% ( 37) 00:07:40.929 17341.834 - 17442.658: 98.0882% ( 35) 00:07:40.929 17442.658 - 17543.483: 98.2812% ( 21) 00:07:40.929 17543.483 - 17644.308: 98.3824% ( 11) 00:07:40.929 17644.308 - 17745.132: 98.4559% ( 8) 00:07:40.929 17745.132 - 17845.957: 98.5570% ( 11) 00:07:40.929 17845.957 - 17946.782: 98.6489% ( 10) 00:07:40.929 17946.782 - 18047.606: 98.7592% ( 12) 00:07:40.929 18047.606 - 18148.431: 98.8143% ( 6) 00:07:40.929 18148.431 - 18249.255: 98.8235% ( 1) 00:07:40.929 27222.646 - 27424.295: 98.8419% ( 2) 00:07:40.929 27424.295 - 27625.945: 98.9246% ( 9) 00:07:40.929 27625.945 - 27827.594: 98.9982% ( 8) 00:07:40.929 27827.594 - 28029.243: 99.0165% ( 2) 00:07:40.929 28432.542 - 28634.191: 99.0901% ( 8) 00:07:40.929 28634.191 - 28835.840: 99.1636% ( 8) 00:07:40.929 28835.840 - 29037.489: 99.2371% ( 8) 00:07:40.929 29037.489 - 29239.138: 99.3199% ( 9) 00:07:40.929 29239.138 - 29440.788: 99.4026% ( 9) 00:07:40.929 29440.788 - 29642.437: 99.4118% ( 1) 00:07:40.929 37506.757 - 37708.406: 99.4577% ( 5) 00:07:40.929 37708.406 - 37910.055: 99.5404% ( 9) 00:07:40.929 37910.055 - 38111.705: 99.6232% ( 9) 00:07:40.929 38111.705 - 38313.354: 99.6967% ( 8) 00:07:40.929 38313.354 - 38515.003: 99.7794% ( 9) 00:07:40.929 38515.003 - 38716.652: 99.8621% ( 9) 00:07:40.929 38716.652 - 38918.302: 99.9357% ( 8) 00:07:40.929 38918.302 - 39119.951: 100.0000% ( 7) 00:07:40.929 00:07:40.929 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:40.929 ============================================================================== 00:07:40.929 Range in us Cumulative IO count 00:07:40.929 8822.154 - 8872.566: 0.0184% ( 2) 00:07:40.930 8872.566 - 8922.978: 0.0551% ( 4) 00:07:40.930 8922.978 - 8973.391: 0.1471% ( 10) 00:07:40.930 8973.391 - 9023.803: 0.2298% ( 9) 00:07:40.930 9023.803 - 9074.215: 0.3309% ( 11) 00:07:40.930 9074.215 - 9124.628: 0.3952% ( 7) 00:07:40.930 9124.628 - 9175.040: 0.4779% ( 9) 00:07:40.930 9175.040 - 9225.452: 0.6158% ( 15) 00:07:40.930 9225.452 - 9275.865: 0.7721% ( 17) 00:07:40.930 9275.865 - 9326.277: 0.9099% ( 15) 00:07:40.930 9326.277 - 9376.689: 1.1305% ( 24) 00:07:40.930 9376.689 - 9427.102: 1.3327% ( 22) 00:07:40.930 9427.102 - 9477.514: 1.5349% ( 22) 00:07:40.930 9477.514 - 
9527.926: 1.8382% ( 33) 00:07:40.930 9527.926 - 9578.338: 2.2886% ( 49) 00:07:40.930 9578.338 - 9628.751: 2.9136% ( 68) 00:07:40.930 9628.751 - 9679.163: 3.5938% ( 74) 00:07:40.930 9679.163 - 9729.575: 4.4210% ( 90) 00:07:40.930 9729.575 - 9779.988: 5.5239% ( 120) 00:07:40.930 9779.988 - 9830.400: 6.7555% ( 134) 00:07:40.930 9830.400 - 9880.812: 8.1710% ( 154) 00:07:40.930 9880.812 - 9931.225: 9.6048% ( 156) 00:07:40.930 9931.225 - 9981.637: 11.2316% ( 177) 00:07:40.930 9981.637 - 10032.049: 12.8768% ( 179) 00:07:40.930 10032.049 - 10082.462: 14.8070% ( 210) 00:07:40.930 10082.462 - 10132.874: 17.0221% ( 241) 00:07:40.930 10132.874 - 10183.286: 19.1912% ( 236) 00:07:40.930 10183.286 - 10233.698: 21.5074% ( 252) 00:07:40.930 10233.698 - 10284.111: 24.0533% ( 277) 00:07:40.930 10284.111 - 10334.523: 26.3603% ( 251) 00:07:40.930 10334.523 - 10384.935: 28.8235% ( 268) 00:07:40.930 10384.935 - 10435.348: 31.8934% ( 334) 00:07:40.930 10435.348 - 10485.760: 34.5037% ( 284) 00:07:40.930 10485.760 - 10536.172: 37.0496% ( 277) 00:07:40.930 10536.172 - 10586.585: 39.5129% ( 268) 00:07:40.930 10586.585 - 10636.997: 41.8015% ( 249) 00:07:40.930 10636.997 - 10687.409: 43.9062% ( 229) 00:07:40.930 10687.409 - 10737.822: 45.9651% ( 224) 00:07:40.930 10737.822 - 10788.234: 47.9412% ( 215) 00:07:40.930 10788.234 - 10838.646: 49.8805% ( 211) 00:07:40.930 10838.646 - 10889.058: 51.7831% ( 207) 00:07:40.930 10889.058 - 10939.471: 53.5938% ( 197) 00:07:40.930 10939.471 - 10989.883: 55.3125% ( 187) 00:07:40.930 10989.883 - 11040.295: 56.9945% ( 183) 00:07:40.930 11040.295 - 11090.708: 58.6765% ( 183) 00:07:40.930 11090.708 - 11141.120: 60.1930% ( 165) 00:07:40.930 11141.120 - 11191.532: 61.5441% ( 147) 00:07:40.930 11191.532 - 11241.945: 62.7574% ( 132) 00:07:40.930 11241.945 - 11292.357: 63.9154% ( 126) 00:07:40.930 11292.357 - 11342.769: 64.9540% ( 113) 00:07:40.930 11342.769 - 11393.182: 65.9283% ( 106) 00:07:40.930 11393.182 - 11443.594: 66.9210% ( 108) 00:07:40.930 11443.594 - 11494.006: 67.9044% ( 107) 00:07:40.930 11494.006 - 11544.418: 68.7684% ( 94) 00:07:40.930 11544.418 - 11594.831: 69.5221% ( 82) 00:07:40.930 11594.831 - 11645.243: 70.2482% ( 79) 00:07:40.930 11645.243 - 11695.655: 70.9743% ( 79) 00:07:40.930 11695.655 - 11746.068: 71.6728% ( 76) 00:07:40.930 11746.068 - 11796.480: 72.4265% ( 82) 00:07:40.930 11796.480 - 11846.892: 73.0239% ( 65) 00:07:40.930 11846.892 - 11897.305: 73.6397% ( 67) 00:07:40.930 11897.305 - 11947.717: 74.1912% ( 60) 00:07:40.930 11947.717 - 11998.129: 74.6875% ( 54) 00:07:40.930 11998.129 - 12048.542: 75.0919% ( 44) 00:07:40.930 12048.542 - 12098.954: 75.5147% ( 46) 00:07:40.930 12098.954 - 12149.366: 75.9099% ( 43) 00:07:40.930 12149.366 - 12199.778: 76.3051% ( 43) 00:07:40.930 12199.778 - 12250.191: 76.6912% ( 42) 00:07:40.930 12250.191 - 12300.603: 76.9761% ( 31) 00:07:40.930 12300.603 - 12351.015: 77.2426% ( 29) 00:07:40.930 12351.015 - 12401.428: 77.4724% ( 25) 00:07:40.930 12401.428 - 12451.840: 77.7022% ( 25) 00:07:40.930 12451.840 - 12502.252: 77.9320% ( 25) 00:07:40.930 12502.252 - 12552.665: 78.1342% ( 22) 00:07:40.930 12552.665 - 12603.077: 78.3364% ( 22) 00:07:40.930 12603.077 - 12653.489: 78.5202% ( 20) 00:07:40.930 12653.489 - 12703.902: 78.7040% ( 20) 00:07:40.930 12703.902 - 12754.314: 78.8603% ( 17) 00:07:40.930 12754.314 - 12804.726: 79.0993% ( 26) 00:07:40.930 12804.726 - 12855.138: 79.3566% ( 28) 00:07:40.930 12855.138 - 12905.551: 79.5772% ( 24) 00:07:40.930 12905.551 - 13006.375: 80.1287% ( 60) 00:07:40.930 13006.375 - 13107.200: 80.5055% ( 41) 
00:07:40.930 13107.200 - 13208.025: 80.8915% ( 42) 00:07:40.930 13208.025 - 13308.849: 81.3235% ( 47) 00:07:40.930 13308.849 - 13409.674: 81.7555% ( 47) 00:07:40.930 13409.674 - 13510.498: 82.1783% ( 46) 00:07:40.930 13510.498 - 13611.323: 82.6562% ( 52) 00:07:40.930 13611.323 - 13712.148: 83.1710% ( 56) 00:07:40.930 13712.148 - 13812.972: 83.6397% ( 51) 00:07:40.930 13812.972 - 13913.797: 84.2555% ( 67) 00:07:40.930 13913.797 - 14014.622: 84.8070% ( 60) 00:07:40.930 14014.622 - 14115.446: 85.3401% ( 58) 00:07:40.930 14115.446 - 14216.271: 85.9283% ( 64) 00:07:40.930 14216.271 - 14317.095: 86.5993% ( 73) 00:07:40.930 14317.095 - 14417.920: 87.2518% ( 71) 00:07:40.930 14417.920 - 14518.745: 87.8860% ( 69) 00:07:40.930 14518.745 - 14619.569: 88.5846% ( 76) 00:07:40.930 14619.569 - 14720.394: 89.1912% ( 66) 00:07:40.930 14720.394 - 14821.218: 89.7702% ( 63) 00:07:40.930 14821.218 - 14922.043: 90.2114% ( 48) 00:07:40.930 14922.043 - 15022.868: 90.5882% ( 41) 00:07:40.930 15022.868 - 15123.692: 91.0294% ( 48) 00:07:40.930 15123.692 - 15224.517: 91.4614% ( 47) 00:07:40.930 15224.517 - 15325.342: 91.9118% ( 49) 00:07:40.930 15325.342 - 15426.166: 92.3989% ( 53) 00:07:40.930 15426.166 - 15526.991: 92.8401% ( 48) 00:07:40.930 15526.991 - 15627.815: 93.2077% ( 40) 00:07:40.930 15627.815 - 15728.640: 93.4835% ( 30) 00:07:40.930 15728.640 - 15829.465: 93.7868% ( 33) 00:07:40.930 15829.465 - 15930.289: 94.1728% ( 42) 00:07:40.930 15930.289 - 16031.114: 94.6232% ( 49) 00:07:40.930 16031.114 - 16131.938: 95.0000% ( 41) 00:07:40.930 16131.938 - 16232.763: 95.3217% ( 35) 00:07:40.930 16232.763 - 16333.588: 95.5974% ( 30) 00:07:40.930 16333.588 - 16434.412: 95.8915% ( 32) 00:07:40.930 16434.412 - 16535.237: 96.2224% ( 36) 00:07:40.930 16535.237 - 16636.062: 96.5074% ( 31) 00:07:40.930 16636.062 - 16736.886: 96.7831% ( 30) 00:07:40.930 16736.886 - 16837.711: 97.0496% ( 29) 00:07:40.930 16837.711 - 16938.535: 97.3070% ( 28) 00:07:40.930 16938.535 - 17039.360: 97.5276% ( 24) 00:07:40.930 17039.360 - 17140.185: 97.7849% ( 28) 00:07:40.931 17140.185 - 17241.009: 98.0515% ( 29) 00:07:40.931 17241.009 - 17341.834: 98.2445% ( 21) 00:07:40.931 17341.834 - 17442.658: 98.3456% ( 11) 00:07:40.931 17442.658 - 17543.483: 98.4467% ( 11) 00:07:40.931 17543.483 - 17644.308: 98.5570% ( 12) 00:07:40.931 17644.308 - 17745.132: 98.6673% ( 12) 00:07:40.931 17745.132 - 17845.957: 98.7316% ( 7) 00:07:40.931 17845.957 - 17946.782: 98.7868% ( 6) 00:07:40.931 17946.782 - 18047.606: 98.8235% ( 4) 00:07:40.931 26214.400 - 26416.049: 98.8695% ( 5) 00:07:40.931 26416.049 - 26617.698: 98.9522% ( 9) 00:07:40.931 26617.698 - 26819.348: 99.0257% ( 8) 00:07:40.931 26819.348 - 27020.997: 99.1085% ( 9) 00:07:40.931 27020.997 - 27222.646: 99.1820% ( 8) 00:07:40.931 27222.646 - 27424.295: 99.2555% ( 8) 00:07:40.931 27424.295 - 27625.945: 99.3382% ( 9) 00:07:40.931 27625.945 - 27827.594: 99.4118% ( 8) 00:07:40.931 36296.862 - 36498.511: 99.4761% ( 7) 00:07:40.931 36498.511 - 36700.160: 99.5496% ( 8) 00:07:40.931 36700.160 - 36901.809: 99.6324% ( 9) 00:07:40.931 36901.809 - 37103.458: 99.7059% ( 8) 00:07:40.931 37103.458 - 37305.108: 99.7886% ( 9) 00:07:40.931 37305.108 - 37506.757: 99.8621% ( 8) 00:07:40.931 37506.757 - 37708.406: 99.9449% ( 9) 00:07:40.931 37708.406 - 37910.055: 100.0000% ( 6) 00:07:40.931 00:07:40.931 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:40.931 ============================================================================== 00:07:40.931 Range in us Cumulative IO count 00:07:40.931 8922.978 - 
8973.391: 0.0184% ( 2) 00:07:40.931 8973.391 - 9023.803: 0.0643% ( 5) 00:07:40.931 9023.803 - 9074.215: 0.1011% ( 4) 00:07:40.931 9074.215 - 9124.628: 0.1195% ( 2) 00:07:40.931 9124.628 - 9175.040: 0.1562% ( 4) 00:07:40.931 9175.040 - 9225.452: 0.3768% ( 24) 00:07:40.931 9225.452 - 9275.865: 0.5423% ( 18) 00:07:40.931 9275.865 - 9326.277: 0.7169% ( 19) 00:07:40.931 9326.277 - 9376.689: 0.9926% ( 30) 00:07:40.931 9376.689 - 9427.102: 1.2224% ( 25) 00:07:40.931 9427.102 - 9477.514: 1.5074% ( 31) 00:07:40.931 9477.514 - 9527.926: 1.8566% ( 38) 00:07:40.931 9527.926 - 9578.338: 2.4449% ( 64) 00:07:40.931 9578.338 - 9628.751: 3.0974% ( 71) 00:07:40.931 9628.751 - 9679.163: 3.9154% ( 89) 00:07:40.931 9679.163 - 9729.575: 4.9449% ( 112) 00:07:40.931 9729.575 - 9779.988: 6.0754% ( 123) 00:07:40.931 9779.988 - 9830.400: 7.1783% ( 120) 00:07:40.931 9830.400 - 9880.812: 8.3732% ( 130) 00:07:40.931 9880.812 - 9931.225: 10.0551% ( 183) 00:07:40.931 9931.225 - 9981.637: 11.8199% ( 192) 00:07:40.931 9981.637 - 10032.049: 13.6949% ( 204) 00:07:40.931 10032.049 - 10082.462: 15.7996% ( 229) 00:07:40.931 10082.462 - 10132.874: 17.9320% ( 232) 00:07:40.931 10132.874 - 10183.286: 20.0551% ( 231) 00:07:40.931 10183.286 - 10233.698: 22.1140% ( 224) 00:07:40.931 10233.698 - 10284.111: 24.3382% ( 242) 00:07:40.931 10284.111 - 10334.523: 26.6452% ( 251) 00:07:40.931 10334.523 - 10384.935: 29.2371% ( 282) 00:07:40.931 10384.935 - 10435.348: 31.7647% ( 275) 00:07:40.931 10435.348 - 10485.760: 34.3474% ( 281) 00:07:40.931 10485.760 - 10536.172: 36.8658% ( 274) 00:07:40.931 10536.172 - 10586.585: 39.2555% ( 260) 00:07:40.931 10586.585 - 10636.997: 41.5993% ( 255) 00:07:40.931 10636.997 - 10687.409: 43.9890% ( 260) 00:07:40.931 10687.409 - 10737.822: 46.0386% ( 223) 00:07:40.931 10737.822 - 10788.234: 48.3548% ( 252) 00:07:40.931 10788.234 - 10838.646: 50.3217% ( 214) 00:07:40.931 10838.646 - 10889.058: 52.3805% ( 224) 00:07:40.931 10889.058 - 10939.471: 54.2096% ( 199) 00:07:40.931 10939.471 - 10989.883: 55.9283% ( 187) 00:07:40.931 10989.883 - 11040.295: 57.5643% ( 178) 00:07:40.931 11040.295 - 11090.708: 58.9338% ( 149) 00:07:40.931 11090.708 - 11141.120: 60.4136% ( 161) 00:07:40.931 11141.120 - 11191.532: 61.5625% ( 125) 00:07:40.931 11191.532 - 11241.945: 62.9779% ( 154) 00:07:40.931 11241.945 - 11292.357: 64.1728% ( 130) 00:07:40.931 11292.357 - 11342.769: 65.3676% ( 130) 00:07:40.931 11342.769 - 11393.182: 66.3143% ( 103) 00:07:40.931 11393.182 - 11443.594: 67.3713% ( 115) 00:07:40.931 11443.594 - 11494.006: 68.2445% ( 95) 00:07:40.931 11494.006 - 11544.418: 68.9430% ( 76) 00:07:40.931 11544.418 - 11594.831: 69.6507% ( 77) 00:07:40.931 11594.831 - 11645.243: 70.5239% ( 95) 00:07:40.931 11645.243 - 11695.655: 71.1305% ( 66) 00:07:40.931 11695.655 - 11746.068: 71.7096% ( 63) 00:07:40.931 11746.068 - 11796.480: 72.2335% ( 57) 00:07:40.931 11796.480 - 11846.892: 72.7298% ( 54) 00:07:40.931 11846.892 - 11897.305: 73.3456% ( 67) 00:07:40.931 11897.305 - 11947.717: 73.8879% ( 59) 00:07:40.931 11947.717 - 11998.129: 74.2923% ( 44) 00:07:40.931 11998.129 - 12048.542: 74.6967% ( 44) 00:07:40.931 12048.542 - 12098.954: 75.2849% ( 64) 00:07:40.931 12098.954 - 12149.366: 75.6526% ( 40) 00:07:40.931 12149.366 - 12199.778: 76.0570% ( 44) 00:07:40.931 12199.778 - 12250.191: 76.4430% ( 42) 00:07:40.931 12250.191 - 12300.603: 76.7923% ( 38) 00:07:40.931 12300.603 - 12351.015: 77.1599% ( 40) 00:07:40.931 12351.015 - 12401.428: 77.4265% ( 29) 00:07:40.931 12401.428 - 12451.840: 77.6746% ( 27) 00:07:40.931 12451.840 - 12502.252: 
77.9136% ( 26) 00:07:40.931 12502.252 - 12552.665: 78.2537% ( 37) 00:07:40.931 12552.665 - 12603.077: 78.4375% ( 20) 00:07:40.931 12603.077 - 12653.489: 78.6765% ( 26) 00:07:40.931 12653.489 - 12703.902: 78.9246% ( 27) 00:07:40.931 12703.902 - 12754.314: 79.1085% ( 20) 00:07:40.931 12754.314 - 12804.726: 79.3199% ( 23) 00:07:40.931 12804.726 - 12855.138: 79.4577% ( 15) 00:07:40.931 12855.138 - 12905.551: 79.6599% ( 22) 00:07:40.931 12905.551 - 13006.375: 80.0460% ( 42) 00:07:40.931 13006.375 - 13107.200: 80.4963% ( 49) 00:07:40.931 13107.200 - 13208.025: 80.8640% ( 40) 00:07:40.931 13208.025 - 13308.849: 81.2684% ( 44) 00:07:40.931 13308.849 - 13409.674: 81.7371% ( 51) 00:07:40.931 13409.674 - 13510.498: 82.3162% ( 63) 00:07:40.931 13510.498 - 13611.323: 82.8768% ( 61) 00:07:40.931 13611.323 - 13712.148: 83.2445% ( 40) 00:07:40.931 13712.148 - 13812.972: 83.9154% ( 73) 00:07:40.931 13812.972 - 13913.797: 84.2739% ( 39) 00:07:40.931 13913.797 - 14014.622: 84.8254% ( 60) 00:07:40.931 14014.622 - 14115.446: 85.3860% ( 61) 00:07:40.931 14115.446 - 14216.271: 86.0570% ( 73) 00:07:40.932 14216.271 - 14317.095: 86.6085% ( 60) 00:07:40.932 14317.095 - 14417.920: 87.2610% ( 71) 00:07:40.932 14417.920 - 14518.745: 87.8309% ( 62) 00:07:40.932 14518.745 - 14619.569: 88.3180% ( 53) 00:07:40.932 14619.569 - 14720.394: 88.8695% ( 60) 00:07:40.932 14720.394 - 14821.218: 89.4026% ( 58) 00:07:40.932 14821.218 - 14922.043: 89.9724% ( 62) 00:07:40.932 14922.043 - 15022.868: 90.4871% ( 56) 00:07:40.932 15022.868 - 15123.692: 91.1857% ( 76) 00:07:40.932 15123.692 - 15224.517: 91.8107% ( 68) 00:07:40.932 15224.517 - 15325.342: 92.1691% ( 39) 00:07:40.932 15325.342 - 15426.166: 92.6195% ( 49) 00:07:40.932 15426.166 - 15526.991: 92.8401% ( 24) 00:07:40.932 15526.991 - 15627.815: 93.2261% ( 42) 00:07:40.932 15627.815 - 15728.640: 93.7040% ( 52) 00:07:40.932 15728.640 - 15829.465: 94.1268% ( 46) 00:07:40.932 15829.465 - 15930.289: 94.5772% ( 49) 00:07:40.932 15930.289 - 16031.114: 94.9265% ( 38) 00:07:40.932 16031.114 - 16131.938: 95.3125% ( 42) 00:07:40.932 16131.938 - 16232.763: 95.6066% ( 32) 00:07:40.932 16232.763 - 16333.588: 95.8272% ( 24) 00:07:40.932 16333.588 - 16434.412: 96.1765% ( 38) 00:07:40.932 16434.412 - 16535.237: 96.4430% ( 29) 00:07:40.932 16535.237 - 16636.062: 96.7188% ( 30) 00:07:40.932 16636.062 - 16736.886: 96.9853% ( 29) 00:07:40.932 16736.886 - 16837.711: 97.2978% ( 34) 00:07:40.932 16837.711 - 16938.535: 97.4265% ( 14) 00:07:40.932 16938.535 - 17039.360: 97.5460% ( 13) 00:07:40.932 17039.360 - 17140.185: 97.6746% ( 14) 00:07:40.932 17140.185 - 17241.009: 97.8217% ( 16) 00:07:40.932 17241.009 - 17341.834: 97.9963% ( 19) 00:07:40.932 17341.834 - 17442.658: 98.0974% ( 11) 00:07:40.932 17442.658 - 17543.483: 98.2537% ( 17) 00:07:40.932 17543.483 - 17644.308: 98.4099% ( 17) 00:07:40.932 17644.308 - 17745.132: 98.5110% ( 11) 00:07:40.932 17745.132 - 17845.957: 98.6029% ( 10) 00:07:40.932 17845.957 - 17946.782: 98.6765% ( 8) 00:07:40.932 17946.782 - 18047.606: 98.7224% ( 5) 00:07:40.932 18047.606 - 18148.431: 98.7592% ( 4) 00:07:40.932 18148.431 - 18249.255: 98.8051% ( 5) 00:07:40.932 18249.255 - 18350.080: 98.8235% ( 2) 00:07:40.932 25206.154 - 25306.978: 98.8327% ( 1) 00:07:40.932 25306.978 - 25407.803: 98.8695% ( 4) 00:07:40.932 25407.803 - 25508.628: 98.9062% ( 4) 00:07:40.932 25508.628 - 25609.452: 98.9338% ( 3) 00:07:40.932 25609.452 - 25710.277: 98.9798% ( 5) 00:07:40.932 25710.277 - 25811.102: 99.0074% ( 3) 00:07:40.932 25811.102 - 26012.751: 99.0809% ( 8) 00:07:40.932 26012.751 - 
26214.400: 99.1452% ( 7) 00:07:40.932 26214.400 - 26416.049: 99.2279% ( 9) 00:07:40.932 26416.049 - 26617.698: 99.3015% ( 8) 00:07:40.932 26617.698 - 26819.348: 99.3566% ( 6) 00:07:40.932 26819.348 - 27020.997: 99.4118% ( 6) 00:07:40.932 35086.966 - 35288.615: 99.4485% ( 4) 00:07:40.932 35288.615 - 35490.265: 99.5221% ( 8) 00:07:40.932 35490.265 - 35691.914: 99.5864% ( 7) 00:07:40.932 35691.914 - 35893.563: 99.6232% ( 4) 00:07:40.932 35893.563 - 36095.212: 99.7426% ( 13) 00:07:40.932 36095.212 - 36296.862: 99.7978% ( 6) 00:07:40.932 36296.862 - 36498.511: 99.8621% ( 7) 00:07:40.932 36498.511 - 36700.160: 99.9357% ( 8) 00:07:40.932 36700.160 - 36901.809: 100.0000% ( 7) 00:07:40.932 00:07:40.932 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:40.932 ============================================================================== 00:07:40.932 Range in us Cumulative IO count 00:07:40.932 9225.452 - 9275.865: 0.0276% ( 3) 00:07:40.932 9275.865 - 9326.277: 0.1195% ( 10) 00:07:40.932 9326.277 - 9376.689: 0.2665% ( 16) 00:07:40.932 9376.689 - 9427.102: 0.3676% ( 11) 00:07:40.932 9427.102 - 9477.514: 0.5147% ( 16) 00:07:40.932 9477.514 - 9527.926: 0.8548% ( 37) 00:07:40.932 9527.926 - 9578.338: 1.2408% ( 42) 00:07:40.932 9578.338 - 9628.751: 1.9210% ( 74) 00:07:40.932 9628.751 - 9679.163: 2.6379% ( 78) 00:07:40.932 9679.163 - 9729.575: 3.4283% ( 86) 00:07:40.932 9729.575 - 9779.988: 4.3474% ( 100) 00:07:40.932 9779.988 - 9830.400: 5.4228% ( 117) 00:07:40.932 9830.400 - 9880.812: 6.6176% ( 130) 00:07:40.932 9880.812 - 9931.225: 7.9779% ( 148) 00:07:40.932 9931.225 - 9981.637: 9.5404% ( 170) 00:07:40.932 9981.637 - 10032.049: 11.6085% ( 225) 00:07:40.932 10032.049 - 10082.462: 13.8603% ( 245) 00:07:40.932 10082.462 - 10132.874: 16.3419% ( 270) 00:07:40.932 10132.874 - 10183.286: 18.9798% ( 287) 00:07:40.932 10183.286 - 10233.698: 21.6085% ( 286) 00:07:40.932 10233.698 - 10284.111: 24.4026% ( 304) 00:07:40.932 10284.111 - 10334.523: 26.9761% ( 280) 00:07:40.932 10334.523 - 10384.935: 29.7610% ( 303) 00:07:40.932 10384.935 - 10435.348: 32.4449% ( 292) 00:07:40.932 10435.348 - 10485.760: 34.9265% ( 270) 00:07:40.932 10485.760 - 10536.172: 37.5460% ( 285) 00:07:40.932 10536.172 - 10586.585: 40.1287% ( 281) 00:07:40.932 10586.585 - 10636.997: 42.7114% ( 281) 00:07:40.932 10636.997 - 10687.409: 45.2665% ( 278) 00:07:40.932 10687.409 - 10737.822: 47.6379% ( 258) 00:07:40.932 10737.822 - 10788.234: 49.8621% ( 242) 00:07:40.932 10788.234 - 10838.646: 51.9301% ( 225) 00:07:40.932 10838.646 - 10889.058: 53.6949% ( 192) 00:07:40.932 10889.058 - 10939.471: 55.3952% ( 185) 00:07:40.932 10939.471 - 10989.883: 57.0588% ( 181) 00:07:40.932 10989.883 - 11040.295: 58.5662% ( 164) 00:07:40.932 11040.295 - 11090.708: 60.0000% ( 156) 00:07:40.932 11090.708 - 11141.120: 61.2592% ( 137) 00:07:40.932 11141.120 - 11191.532: 62.5092% ( 136) 00:07:40.932 11191.532 - 11241.945: 63.7132% ( 131) 00:07:40.932 11241.945 - 11292.357: 64.8438% ( 123) 00:07:40.932 11292.357 - 11342.769: 65.8640% ( 111) 00:07:40.932 11342.769 - 11393.182: 66.7739% ( 99) 00:07:40.932 11393.182 - 11443.594: 67.7482% ( 106) 00:07:40.932 11443.594 - 11494.006: 68.6121% ( 94) 00:07:40.932 11494.006 - 11544.418: 69.3750% ( 83) 00:07:40.932 11544.418 - 11594.831: 70.1195% ( 81) 00:07:40.932 11594.831 - 11645.243: 70.8824% ( 83) 00:07:40.932 11645.243 - 11695.655: 71.5349% ( 71) 00:07:40.932 11695.655 - 11746.068: 72.1599% ( 68) 00:07:40.932 11746.068 - 11796.480: 72.6654% ( 55) 00:07:40.932 11796.480 - 11846.892: 73.1434% ( 52) 00:07:40.932 
11846.892 - 11897.305: 73.6029% ( 50) 00:07:40.932 11897.305 - 11947.717: 74.0257% ( 46) 00:07:40.932 11947.717 - 11998.129: 74.4393% ( 45) 00:07:40.932 11998.129 - 12048.542: 74.8529% ( 45) 00:07:40.932 12048.542 - 12098.954: 75.1930% ( 37) 00:07:40.932 12098.954 - 12149.366: 75.5423% ( 38) 00:07:40.933 12149.366 - 12199.778: 75.8824% ( 37) 00:07:40.933 12199.778 - 12250.191: 76.1673% ( 31) 00:07:40.933 12250.191 - 12300.603: 76.3971% ( 25) 00:07:40.933 12300.603 - 12351.015: 76.6360% ( 26) 00:07:40.933 12351.015 - 12401.428: 76.9577% ( 35) 00:07:40.933 12401.428 - 12451.840: 77.2151% ( 28) 00:07:40.933 12451.840 - 12502.252: 77.5000% ( 31) 00:07:40.933 12502.252 - 12552.665: 77.8401% ( 37) 00:07:40.933 12552.665 - 12603.077: 78.1526% ( 34) 00:07:40.933 12603.077 - 12653.489: 78.4283% ( 30) 00:07:40.933 12653.489 - 12703.902: 78.7224% ( 32) 00:07:40.933 12703.902 - 12754.314: 79.0533% ( 36) 00:07:40.933 12754.314 - 12804.726: 79.3566% ( 33) 00:07:40.933 12804.726 - 12855.138: 79.6783% ( 35) 00:07:40.933 12855.138 - 12905.551: 79.9908% ( 34) 00:07:40.933 12905.551 - 13006.375: 80.6618% ( 73) 00:07:40.933 13006.375 - 13107.200: 81.3235% ( 72) 00:07:40.933 13107.200 - 13208.025: 81.8566% ( 58) 00:07:40.933 13208.025 - 13308.849: 82.3070% ( 49) 00:07:40.933 13308.849 - 13409.674: 82.7390% ( 47) 00:07:40.933 13409.674 - 13510.498: 83.0974% ( 39) 00:07:40.933 13510.498 - 13611.323: 83.4375% ( 37) 00:07:40.933 13611.323 - 13712.148: 83.8419% ( 44) 00:07:40.933 13712.148 - 13812.972: 84.1912% ( 38) 00:07:40.933 13812.972 - 13913.797: 84.5221% ( 36) 00:07:40.933 13913.797 - 14014.622: 84.9081% ( 42) 00:07:40.933 14014.622 - 14115.446: 85.3309% ( 46) 00:07:40.933 14115.446 - 14216.271: 85.8088% ( 52) 00:07:40.933 14216.271 - 14317.095: 86.3235% ( 56) 00:07:40.933 14317.095 - 14417.920: 86.8474% ( 57) 00:07:40.933 14417.920 - 14518.745: 87.3713% ( 57) 00:07:40.933 14518.745 - 14619.569: 87.9228% ( 60) 00:07:40.933 14619.569 - 14720.394: 88.3915% ( 51) 00:07:40.933 14720.394 - 14821.218: 88.9154% ( 57) 00:07:40.933 14821.218 - 14922.043: 89.5129% ( 65) 00:07:40.933 14922.043 - 15022.868: 90.0735% ( 61) 00:07:40.933 15022.868 - 15123.692: 90.7169% ( 70) 00:07:40.933 15123.692 - 15224.517: 91.3327% ( 67) 00:07:40.933 15224.517 - 15325.342: 91.8290% ( 54) 00:07:40.933 15325.342 - 15426.166: 92.4357% ( 66) 00:07:40.933 15426.166 - 15526.991: 93.0423% ( 66) 00:07:40.933 15526.991 - 15627.815: 93.5938% ( 60) 00:07:40.933 15627.815 - 15728.640: 94.1176% ( 57) 00:07:40.933 15728.640 - 15829.465: 94.5772% ( 50) 00:07:40.933 15829.465 - 15930.289: 95.0092% ( 47) 00:07:40.933 15930.289 - 16031.114: 95.4044% ( 43) 00:07:40.933 16031.114 - 16131.938: 95.7537% ( 38) 00:07:40.933 16131.938 - 16232.763: 96.0570% ( 33) 00:07:40.933 16232.763 - 16333.588: 96.3235% ( 29) 00:07:40.933 16333.588 - 16434.412: 96.5257% ( 22) 00:07:40.933 16434.412 - 16535.237: 96.7096% ( 20) 00:07:40.933 16535.237 - 16636.062: 96.9301% ( 24) 00:07:40.933 16636.062 - 16736.886: 97.1599% ( 25) 00:07:40.933 16736.886 - 16837.711: 97.3897% ( 25) 00:07:40.933 16837.711 - 16938.535: 97.5551% ( 18) 00:07:40.933 16938.535 - 17039.360: 97.7206% ( 18) 00:07:40.933 17039.360 - 17140.185: 97.8585% ( 15) 00:07:40.933 17140.185 - 17241.009: 98.0239% ( 18) 00:07:40.933 17241.009 - 17341.834: 98.2169% ( 21) 00:07:40.933 17341.834 - 17442.658: 98.3732% ( 17) 00:07:40.933 17442.658 - 17543.483: 98.4835% ( 12) 00:07:40.933 17543.483 - 17644.308: 98.5386% ( 6) 00:07:40.933 17644.308 - 17745.132: 98.5938% ( 6) 00:07:40.933 17745.132 - 17845.957: 98.6489% ( 6) 
00:07:40.933 17845.957 - 17946.782: 98.7132% ( 7) 00:07:40.933 17946.782 - 18047.606: 98.7684% ( 6) 00:07:40.933 18047.606 - 18148.431: 98.8235% ( 6) 00:07:40.933 24500.382 - 24601.206: 98.8327% ( 1) 00:07:40.933 24601.206 - 24702.031: 98.8695% ( 4) 00:07:40.933 24702.031 - 24802.855: 98.9062% ( 4) 00:07:40.933 24802.855 - 24903.680: 98.9338% ( 3) 00:07:40.933 24903.680 - 25004.505: 98.9706% ( 4) 00:07:40.933 25004.505 - 25105.329: 99.0074% ( 4) 00:07:40.933 25105.329 - 25206.154: 99.0441% ( 4) 00:07:40.933 25206.154 - 25306.978: 99.0809% ( 4) 00:07:40.933 25306.978 - 25407.803: 99.1176% ( 4) 00:07:40.933 25407.803 - 25508.628: 99.1636% ( 5) 00:07:40.933 25508.628 - 25609.452: 99.2004% ( 4) 00:07:40.933 25609.452 - 25710.277: 99.2371% ( 4) 00:07:40.933 25710.277 - 25811.102: 99.2739% ( 4) 00:07:40.933 25811.102 - 26012.751: 99.3566% ( 9) 00:07:40.933 26012.751 - 26214.400: 99.4118% ( 6) 00:07:40.933 33877.071 - 34078.720: 99.4301% ( 2) 00:07:40.933 34078.720 - 34280.369: 99.5037% ( 8) 00:07:40.933 34280.369 - 34482.018: 99.5772% ( 8) 00:07:40.933 34482.018 - 34683.668: 99.6507% ( 8) 00:07:40.933 34683.668 - 34885.317: 99.7243% ( 8) 00:07:40.933 34885.317 - 35086.966: 99.8070% ( 9) 00:07:40.933 35086.966 - 35288.615: 99.8805% ( 8) 00:07:40.933 35288.615 - 35490.265: 99.9632% ( 9) 00:07:40.933 35490.265 - 35691.914: 100.0000% ( 4) 00:07:40.933 00:07:40.933 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:40.933 ============================================================================== 00:07:40.933 Range in us Cumulative IO count 00:07:40.933 8318.031 - 8368.443: 0.0276% ( 3) 00:07:40.933 8368.443 - 8418.855: 0.0551% ( 3) 00:07:40.933 8418.855 - 8469.268: 0.0827% ( 3) 00:07:40.933 8469.268 - 8519.680: 0.1011% ( 2) 00:07:40.933 8519.680 - 8570.092: 0.1287% ( 3) 00:07:40.933 8570.092 - 8620.505: 0.1471% ( 2) 00:07:40.933 8620.505 - 8670.917: 0.1746% ( 3) 00:07:40.933 8670.917 - 8721.329: 0.2022% ( 3) 00:07:40.933 8721.329 - 8771.742: 0.2206% ( 2) 00:07:40.933 8771.742 - 8822.154: 0.2574% ( 4) 00:07:40.933 8822.154 - 8872.566: 0.2941% ( 4) 00:07:40.933 8872.566 - 8922.978: 0.3125% ( 2) 00:07:40.933 8922.978 - 8973.391: 0.3860% ( 8) 00:07:40.933 8973.391 - 9023.803: 0.4412% ( 6) 00:07:40.933 9023.803 - 9074.215: 0.5055% ( 7) 00:07:40.933 9074.215 - 9124.628: 0.5882% ( 9) 00:07:40.933 9124.628 - 9175.040: 0.6801% ( 10) 00:07:40.933 9175.040 - 9225.452: 0.8456% ( 18) 00:07:40.933 9225.452 - 9275.865: 1.2040% ( 39) 00:07:40.933 9275.865 - 9326.277: 1.5349% ( 36) 00:07:40.933 9326.277 - 9376.689: 1.9301% ( 43) 00:07:40.934 9376.689 - 9427.102: 2.3897% ( 50) 00:07:40.934 9427.102 - 9477.514: 2.9136% ( 57) 00:07:40.934 9477.514 - 9527.926: 3.3640% ( 49) 00:07:40.934 9527.926 - 9578.338: 3.8603% ( 54) 00:07:40.934 9578.338 - 9628.751: 4.4945% ( 69) 00:07:40.934 9628.751 - 9679.163: 5.1838% ( 75) 00:07:40.934 9679.163 - 9729.575: 6.2132% ( 112) 00:07:40.934 9729.575 - 9779.988: 7.3070% ( 119) 00:07:40.934 9779.988 - 9830.400: 8.5570% ( 136) 00:07:40.934 9830.400 - 9880.812: 9.8162% ( 137) 00:07:40.934 9880.812 - 9931.225: 11.2868% ( 160) 00:07:40.934 9931.225 - 9981.637: 12.8033% ( 165) 00:07:40.934 9981.637 - 10032.049: 14.3934% ( 173) 00:07:40.934 10032.049 - 10082.462: 16.2776% ( 205) 00:07:40.934 10082.462 - 10132.874: 18.4099% ( 232) 00:07:40.934 10132.874 - 10183.286: 20.7721% ( 257) 00:07:40.934 10183.286 - 10233.698: 23.1342% ( 257) 00:07:40.934 10233.698 - 10284.111: 25.5239% ( 260) 00:07:40.934 10284.111 - 10334.523: 27.7114% ( 238) 00:07:40.934 10334.523 - 10384.935: 
29.9540% ( 244) 00:07:40.934 10384.935 - 10435.348: 32.2886% ( 254) 00:07:40.934 10435.348 - 10485.760: 34.6415% ( 256) 00:07:40.934 10485.760 - 10536.172: 36.8658% ( 242) 00:07:40.934 10536.172 - 10586.585: 39.1360% ( 247) 00:07:40.934 10586.585 - 10636.997: 41.4614% ( 253) 00:07:40.934 10636.997 - 10687.409: 43.7592% ( 250) 00:07:40.934 10687.409 - 10737.822: 45.8915% ( 232) 00:07:40.934 10737.822 - 10788.234: 48.0055% ( 230) 00:07:40.934 10788.234 - 10838.646: 50.1471% ( 233) 00:07:40.934 10838.646 - 10889.058: 52.2243% ( 226) 00:07:40.934 10889.058 - 10939.471: 54.0349% ( 197) 00:07:40.934 10939.471 - 10989.883: 55.9467% ( 208) 00:07:40.934 10989.883 - 11040.295: 57.5919% ( 179) 00:07:40.934 11040.295 - 11090.708: 59.1176% ( 166) 00:07:40.934 11090.708 - 11141.120: 60.5239% ( 153) 00:07:40.934 11141.120 - 11191.532: 61.6912% ( 127) 00:07:40.934 11191.532 - 11241.945: 62.8952% ( 131) 00:07:40.934 11241.945 - 11292.357: 63.9890% ( 119) 00:07:40.934 11292.357 - 11342.769: 65.0551% ( 116) 00:07:40.934 11342.769 - 11393.182: 65.9926% ( 102) 00:07:40.934 11393.182 - 11443.594: 66.8658% ( 95) 00:07:40.934 11443.594 - 11494.006: 67.7941% ( 101) 00:07:40.934 11494.006 - 11544.418: 68.4651% ( 73) 00:07:40.934 11544.418 - 11594.831: 69.0625% ( 65) 00:07:40.934 11594.831 - 11645.243: 69.6324% ( 62) 00:07:40.934 11645.243 - 11695.655: 70.2482% ( 67) 00:07:40.934 11695.655 - 11746.068: 70.8548% ( 66) 00:07:40.934 11746.068 - 11796.480: 71.3879% ( 58) 00:07:40.934 11796.480 - 11846.892: 71.9669% ( 63) 00:07:40.934 11846.892 - 11897.305: 72.5276% ( 61) 00:07:40.934 11897.305 - 11947.717: 73.1342% ( 66) 00:07:40.934 11947.717 - 11998.129: 73.7316% ( 65) 00:07:40.934 11998.129 - 12048.542: 74.3382% ( 66) 00:07:40.934 12048.542 - 12098.954: 74.8713% ( 58) 00:07:40.934 12098.954 - 12149.366: 75.3493% ( 52) 00:07:40.934 12149.366 - 12199.778: 75.8824% ( 58) 00:07:40.934 12199.778 - 12250.191: 76.3511% ( 51) 00:07:40.934 12250.191 - 12300.603: 76.8199% ( 51) 00:07:40.934 12300.603 - 12351.015: 77.1875% ( 40) 00:07:40.934 12351.015 - 12401.428: 77.5276% ( 37) 00:07:40.934 12401.428 - 12451.840: 77.8768% ( 38) 00:07:40.934 12451.840 - 12502.252: 78.1985% ( 35) 00:07:40.934 12502.252 - 12552.665: 78.5938% ( 43) 00:07:40.934 12552.665 - 12603.077: 78.9062% ( 34) 00:07:40.934 12603.077 - 12653.489: 79.2831% ( 41) 00:07:40.934 12653.489 - 12703.902: 79.6232% ( 37) 00:07:40.934 12703.902 - 12754.314: 79.9081% ( 31) 00:07:40.934 12754.314 - 12804.726: 80.1379% ( 25) 00:07:40.934 12804.726 - 12855.138: 80.3676% ( 25) 00:07:40.934 12855.138 - 12905.551: 80.6066% ( 26) 00:07:40.934 12905.551 - 13006.375: 81.0938% ( 53) 00:07:40.934 13006.375 - 13107.200: 81.4890% ( 43) 00:07:40.934 13107.200 - 13208.025: 81.8566% ( 40) 00:07:40.934 13208.025 - 13308.849: 82.2151% ( 39) 00:07:40.934 13308.849 - 13409.674: 82.5827% ( 40) 00:07:40.934 13409.674 - 13510.498: 83.0331% ( 49) 00:07:40.934 13510.498 - 13611.323: 83.3640% ( 36) 00:07:40.934 13611.323 - 13712.148: 83.8051% ( 48) 00:07:40.934 13712.148 - 13812.972: 84.4026% ( 65) 00:07:40.934 13812.972 - 13913.797: 84.9724% ( 62) 00:07:40.934 13913.797 - 14014.622: 85.6710% ( 76) 00:07:40.934 14014.622 - 14115.446: 86.2500% ( 63) 00:07:40.934 14115.446 - 14216.271: 86.7463% ( 54) 00:07:40.934 14216.271 - 14317.095: 87.2794% ( 58) 00:07:40.934 14317.095 - 14417.920: 87.7941% ( 56) 00:07:40.934 14417.920 - 14518.745: 88.3272% ( 58) 00:07:40.934 14518.745 - 14619.569: 88.8327% ( 55) 00:07:40.934 14619.569 - 14720.394: 89.3842% ( 60) 00:07:40.934 14720.394 - 14821.218: 89.9357% ( 
60) 00:07:40.934 14821.218 - 14922.043: 90.3493% ( 45) 00:07:40.934 14922.043 - 15022.868: 90.7077% ( 39) 00:07:40.934 15022.868 - 15123.692: 91.0846% ( 41) 00:07:40.934 15123.692 - 15224.517: 91.4706% ( 42) 00:07:40.934 15224.517 - 15325.342: 91.8658% ( 43) 00:07:40.934 15325.342 - 15426.166: 92.2151% ( 38) 00:07:40.934 15426.166 - 15526.991: 92.5643% ( 38) 00:07:40.934 15526.991 - 15627.815: 93.0423% ( 52) 00:07:40.934 15627.815 - 15728.640: 93.4743% ( 47) 00:07:40.934 15728.640 - 15829.465: 93.8511% ( 41) 00:07:40.934 15829.465 - 15930.289: 94.2463% ( 43) 00:07:40.934 15930.289 - 16031.114: 94.5772% ( 36) 00:07:40.934 16031.114 - 16131.938: 94.9449% ( 40) 00:07:40.934 16131.938 - 16232.763: 95.2574% ( 34) 00:07:40.934 16232.763 - 16333.588: 95.6066% ( 38) 00:07:40.934 16333.588 - 16434.412: 95.9007% ( 32) 00:07:40.934 16434.412 - 16535.237: 96.3051% ( 44) 00:07:40.934 16535.237 - 16636.062: 96.6820% ( 41) 00:07:40.934 16636.062 - 16736.886: 96.9301% ( 27) 00:07:40.934 16736.886 - 16837.711: 97.1599% ( 25) 00:07:40.934 16837.711 - 16938.535: 97.3897% ( 25) 00:07:40.934 16938.535 - 17039.360: 97.6195% ( 25) 00:07:40.934 17039.360 - 17140.185: 97.8309% ( 23) 00:07:40.934 17140.185 - 17241.009: 98.0147% ( 20) 00:07:40.934 17241.009 - 17341.834: 98.1893% ( 19) 00:07:40.934 17341.834 - 17442.658: 98.3364% ( 16) 00:07:40.934 17442.658 - 17543.483: 98.4743% ( 15) 00:07:40.934 17543.483 - 17644.308: 98.5662% ( 10) 00:07:40.934 17644.308 - 17745.132: 98.6489% ( 9) 00:07:40.934 17745.132 - 17845.957: 98.7132% ( 7) 00:07:40.934 17845.957 - 17946.782: 98.7684% ( 6) 00:07:40.934 17946.782 - 18047.606: 98.8235% ( 6) 00:07:40.934 25105.329 - 25206.154: 98.8419% ( 2) 00:07:40.935 25206.154 - 25306.978: 98.8695% ( 3) 00:07:40.935 25306.978 - 25407.803: 98.8971% ( 3) 00:07:40.935 25407.803 - 25508.628: 98.9246% ( 3) 00:07:40.935 25508.628 - 25609.452: 98.9522% ( 3) 00:07:40.935 25609.452 - 25710.277: 98.9982% ( 5) 00:07:40.935 25710.277 - 25811.102: 99.0349% ( 4) 00:07:40.935 25811.102 - 26012.751: 99.1085% ( 8) 00:07:40.935 26012.751 - 26214.400: 99.1728% ( 7) 00:07:40.935 26214.400 - 26416.049: 99.2555% ( 9) 00:07:40.935 26416.049 - 26617.698: 99.3290% ( 8) 00:07:40.935 26617.698 - 26819.348: 99.4118% ( 9) 00:07:40.935 33070.474 - 33272.123: 99.4210% ( 1) 00:07:40.935 33272.123 - 33473.772: 99.4853% ( 7) 00:07:40.935 33473.772 - 33675.422: 99.5496% ( 7) 00:07:40.935 33675.422 - 33877.071: 99.6048% ( 6) 00:07:40.935 33877.071 - 34078.720: 99.6691% ( 7) 00:07:40.935 34078.720 - 34280.369: 99.7426% ( 8) 00:07:40.935 34280.369 - 34482.018: 99.8162% ( 8) 00:07:40.935 34482.018 - 34683.668: 99.8805% ( 7) 00:07:40.935 34683.668 - 34885.317: 99.9540% ( 8) 00:07:40.935 34885.317 - 35086.966: 100.0000% ( 5) 00:07:40.935 00:07:40.935 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:40.935 ============================================================================== 00:07:40.935 Range in us Cumulative IO count 00:07:40.935 8519.680 - 8570.092: 0.0183% ( 2) 00:07:40.935 8570.092 - 8620.505: 0.0548% ( 4) 00:07:40.935 8620.505 - 8670.917: 0.0731% ( 2) 00:07:40.935 8670.917 - 8721.329: 0.1188% ( 5) 00:07:40.935 8721.329 - 8771.742: 0.1371% ( 2) 00:07:40.935 8771.742 - 8822.154: 0.1553% ( 2) 00:07:40.935 8822.154 - 8872.566: 0.1736% ( 2) 00:07:40.935 8872.566 - 8922.978: 0.1919% ( 2) 00:07:40.935 8922.978 - 8973.391: 0.2833% ( 10) 00:07:40.935 8973.391 - 9023.803: 0.3838% ( 11) 00:07:40.935 9023.803 - 9074.215: 0.4843% ( 11) 00:07:40.935 9074.215 - 9124.628: 0.5757% ( 10) 00:07:40.935 9124.628 - 
9175.040: 0.6488% ( 8) 00:07:40.935 9175.040 - 9225.452: 0.7950% ( 16) 00:07:40.935 9225.452 - 9275.865: 0.9412% ( 16) 00:07:40.935 9275.865 - 9326.277: 1.1879% ( 27) 00:07:40.935 9326.277 - 9376.689: 1.4529% ( 29) 00:07:40.935 9376.689 - 9427.102: 1.6904% ( 26) 00:07:40.935 9427.102 - 9477.514: 2.0559% ( 40) 00:07:40.935 9477.514 - 9527.926: 2.5676% ( 56) 00:07:40.935 9527.926 - 9578.338: 3.1524% ( 64) 00:07:40.935 9578.338 - 9628.751: 3.8469% ( 76) 00:07:40.935 9628.751 - 9679.163: 4.7332% ( 97) 00:07:40.935 9679.163 - 9729.575: 5.8114% ( 118) 00:07:40.935 9729.575 - 9779.988: 6.9719% ( 127) 00:07:40.935 9779.988 - 9830.400: 8.0866% ( 122) 00:07:40.935 9830.400 - 9880.812: 9.5212% ( 157) 00:07:40.935 9880.812 - 9931.225: 11.0197% ( 164) 00:07:40.935 9931.225 - 9981.637: 12.5183% ( 164) 00:07:40.935 9981.637 - 10032.049: 14.1356% ( 177) 00:07:40.935 10032.049 - 10082.462: 15.9814% ( 202) 00:07:40.935 10082.462 - 10132.874: 18.0738% ( 229) 00:07:40.935 10132.874 - 10183.286: 20.2851% ( 242) 00:07:40.935 10183.286 - 10233.698: 22.7065% ( 265) 00:07:40.935 10233.698 - 10284.111: 25.0274% ( 254) 00:07:40.935 10284.111 - 10334.523: 27.6590% ( 288) 00:07:40.935 10334.523 - 10384.935: 30.2540% ( 284) 00:07:40.935 10384.935 - 10435.348: 32.6754% ( 265) 00:07:40.935 10435.348 - 10485.760: 35.2248% ( 279) 00:07:40.935 10485.760 - 10536.172: 37.7284% ( 274) 00:07:40.935 10536.172 - 10586.585: 40.1224% ( 262) 00:07:40.935 10586.585 - 10636.997: 42.3794% ( 247) 00:07:40.935 10636.997 - 10687.409: 44.4719% ( 229) 00:07:40.935 10687.409 - 10737.822: 46.4821% ( 220) 00:07:40.935 10737.822 - 10788.234: 48.4284% ( 213) 00:07:40.935 10788.234 - 10838.646: 50.5300% ( 230) 00:07:40.935 10838.646 - 10889.058: 52.5311% ( 219) 00:07:40.935 10889.058 - 10939.471: 54.4499% ( 210) 00:07:40.935 10939.471 - 10989.883: 55.9393% ( 163) 00:07:40.935 10989.883 - 11040.295: 57.3374% ( 153) 00:07:40.935 11040.295 - 11090.708: 58.8542% ( 166) 00:07:40.935 11090.708 - 11141.120: 60.0329% ( 129) 00:07:40.935 11141.120 - 11191.532: 61.1294% ( 120) 00:07:40.935 11191.532 - 11241.945: 62.1528% ( 112) 00:07:40.935 11241.945 - 11292.357: 63.1396% ( 108) 00:07:40.935 11292.357 - 11342.769: 64.0534% ( 100) 00:07:40.935 11342.769 - 11393.182: 65.0037% ( 104) 00:07:40.935 11393.182 - 11443.594: 66.0545% ( 115) 00:07:40.935 11443.594 - 11494.006: 66.9773% ( 101) 00:07:40.935 11494.006 - 11544.418: 67.8637% ( 97) 00:07:40.935 11544.418 - 11594.831: 68.6404% ( 85) 00:07:40.935 11594.831 - 11645.243: 69.4536% ( 89) 00:07:40.935 11645.243 - 11695.655: 70.1937% ( 81) 00:07:40.935 11695.655 - 11746.068: 70.8333% ( 70) 00:07:40.935 11746.068 - 11796.480: 71.4090% ( 63) 00:07:40.935 11796.480 - 11846.892: 71.9938% ( 64) 00:07:40.935 11846.892 - 11897.305: 72.6060% ( 67) 00:07:40.935 11897.305 - 11947.717: 73.1725% ( 62) 00:07:40.935 11947.717 - 11998.129: 73.7116% ( 59) 00:07:40.936 11998.129 - 12048.542: 74.2599% ( 60) 00:07:40.936 12048.542 - 12098.954: 74.8538% ( 65) 00:07:40.936 12098.954 - 12149.366: 75.4660% ( 67) 00:07:40.936 12149.366 - 12199.778: 75.9594% ( 54) 00:07:40.936 12199.778 - 12250.191: 76.4163% ( 50) 00:07:40.936 12250.191 - 12300.603: 76.7544% ( 37) 00:07:40.936 12300.603 - 12351.015: 77.1290% ( 41) 00:07:40.936 12351.015 - 12401.428: 77.5128% ( 42) 00:07:40.936 12401.428 - 12451.840: 77.9057% ( 43) 00:07:40.936 12451.840 - 12502.252: 78.2529% ( 38) 00:07:40.936 12502.252 - 12552.665: 78.5636% ( 34) 00:07:40.936 12552.665 - 12603.077: 78.8286% ( 29) 00:07:40.936 12603.077 - 12653.489: 79.1575% ( 36) 00:07:40.936 12653.489 
- 12703.902: 79.4317% ( 30) 00:07:40.936 12703.902 - 12754.314: 79.6966% ( 29) 00:07:40.936 12754.314 - 12804.726: 79.9159% ( 24) 00:07:40.936 12804.726 - 12855.138: 80.1261% ( 23) 00:07:40.936 12855.138 - 12905.551: 80.2814% ( 17) 00:07:40.936 12905.551 - 13006.375: 80.5830% ( 33) 00:07:40.936 13006.375 - 13107.200: 80.7931% ( 23) 00:07:40.936 13107.200 - 13208.025: 81.1038% ( 34) 00:07:40.936 13208.025 - 13308.849: 81.5789% ( 52) 00:07:40.936 13308.849 - 13409.674: 82.0632% ( 53) 00:07:40.936 13409.674 - 13510.498: 82.5384% ( 52) 00:07:40.936 13510.498 - 13611.323: 83.0135% ( 52) 00:07:40.936 13611.323 - 13712.148: 83.4978% ( 53) 00:07:40.936 13712.148 - 13812.972: 84.0186% ( 57) 00:07:40.936 13812.972 - 13913.797: 84.4846% ( 51) 00:07:40.936 13913.797 - 14014.622: 85.0877% ( 66) 00:07:40.936 14014.622 - 14115.446: 85.8553% ( 84) 00:07:40.936 14115.446 - 14216.271: 86.4583% ( 66) 00:07:40.936 14216.271 - 14317.095: 87.0157% ( 61) 00:07:40.936 14317.095 - 14417.920: 87.5822% ( 62) 00:07:40.936 14417.920 - 14518.745: 88.1122% ( 58) 00:07:40.936 14518.745 - 14619.569: 88.6056% ( 54) 00:07:40.936 14619.569 - 14720.394: 89.1813% ( 63) 00:07:40.936 14720.394 - 14821.218: 89.7935% ( 67) 00:07:40.936 14821.218 - 14922.043: 90.4331% ( 70) 00:07:40.936 14922.043 - 15022.868: 91.0270% ( 65) 00:07:40.936 15022.868 - 15123.692: 91.3743% ( 38) 00:07:40.936 15123.692 - 15224.517: 91.7489% ( 41) 00:07:40.936 15224.517 - 15325.342: 92.1418% ( 43) 00:07:40.936 15325.342 - 15426.166: 92.4251% ( 31) 00:07:40.936 15426.166 - 15526.991: 92.7449% ( 35) 00:07:40.936 15526.991 - 15627.815: 93.1652% ( 46) 00:07:40.936 15627.815 - 15728.640: 93.5947% ( 47) 00:07:40.936 15728.640 - 15829.465: 94.0881% ( 54) 00:07:40.936 15829.465 - 15930.289: 94.4901% ( 44) 00:07:40.936 15930.289 - 16031.114: 94.9927% ( 55) 00:07:40.936 16031.114 - 16131.938: 95.3947% ( 44) 00:07:40.936 16131.938 - 16232.763: 95.7145% ( 35) 00:07:40.936 16232.763 - 16333.588: 96.0252% ( 34) 00:07:40.936 16333.588 - 16434.412: 96.3450% ( 35) 00:07:40.936 16434.412 - 16535.237: 96.5826% ( 26) 00:07:40.936 16535.237 - 16636.062: 96.8476% ( 29) 00:07:40.936 16636.062 - 16736.886: 96.9938% ( 16) 00:07:40.936 16736.886 - 16837.711: 97.1583% ( 18) 00:07:40.936 16837.711 - 16938.535: 97.3045% ( 16) 00:07:40.936 16938.535 - 17039.360: 97.5146% ( 23) 00:07:40.936 17039.360 - 17140.185: 97.6974% ( 20) 00:07:40.936 17140.185 - 17241.009: 97.9167% ( 24) 00:07:40.936 17241.009 - 17341.834: 98.1451% ( 25) 00:07:40.936 17341.834 - 17442.658: 98.3644% ( 24) 00:07:40.936 17442.658 - 17543.483: 98.5746% ( 23) 00:07:40.936 17543.483 - 17644.308: 98.7390% ( 18) 00:07:40.936 17644.308 - 17745.132: 98.9126% ( 19) 00:07:40.936 17745.132 - 17845.957: 99.0771% ( 18) 00:07:40.936 17845.957 - 17946.782: 99.2050% ( 14) 00:07:40.936 17946.782 - 18047.606: 99.2873% ( 9) 00:07:40.936 18047.606 - 18148.431: 99.3604% ( 8) 00:07:40.936 18148.431 - 18249.255: 99.4152% ( 6) 00:07:40.936 25206.154 - 25306.978: 99.4335% ( 2) 00:07:40.936 25306.978 - 25407.803: 99.4700% ( 4) 00:07:40.936 25407.803 - 25508.628: 99.5066% ( 4) 00:07:40.936 25508.628 - 25609.452: 99.5431% ( 4) 00:07:40.936 25609.452 - 25710.277: 99.5797% ( 4) 00:07:40.936 25710.277 - 25811.102: 99.6254% ( 5) 00:07:40.936 25811.102 - 26012.751: 99.6985% ( 8) 00:07:40.936 26012.751 - 26214.400: 99.7716% ( 8) 00:07:40.936 26214.400 - 26416.049: 99.8538% ( 9) 00:07:40.936 26416.049 - 26617.698: 99.9269% ( 8) 00:07:40.936 26617.698 - 26819.348: 100.0000% ( 8) 00:07:40.936 00:07:40.936 01:33:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:42.313 Initializing NVMe Controllers 00:07:42.313 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:42.313 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:42.313 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:42.313 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:42.313 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:42.313 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:42.313 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:42.313 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:42.313 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:42.313 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:42.313 Initialization complete. Launching workers. 00:07:42.313 ======================================================== 00:07:42.313 Latency(us) 00:07:42.313 Device Information : IOPS MiB/s Average min max 00:07:42.313 PCIE (0000:00:11.0) NSID 1 from core 0: 9445.73 110.69 13581.74 9038.13 34597.86 00:07:42.313 PCIE (0000:00:13.0) NSID 1 from core 0: 9445.73 110.69 13563.98 9001.65 33338.31 00:07:42.313 PCIE (0000:00:10.0) NSID 1 from core 0: 9445.73 110.69 13543.32 8921.96 32099.05 00:07:42.313 PCIE (0000:00:12.0) NSID 1 from core 0: 9445.73 110.69 13523.32 8968.01 30206.91 00:07:42.313 PCIE (0000:00:12.0) NSID 2 from core 0: 9445.73 110.69 13504.46 8850.70 29182.76 00:07:42.313 PCIE (0000:00:12.0) NSID 3 from core 0: 9509.55 111.44 13395.32 9103.36 22459.14 00:07:42.313 ======================================================== 00:07:42.313 Total : 56738.21 664.90 13518.55 8850.70 34597.86 00:07:42.313 00:07:42.313 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:42.313 ================================================================================= 00:07:42.313 1.00000% : 9427.102us 00:07:42.313 10.00000% : 10939.471us 00:07:42.313 25.00000% : 11846.892us 00:07:42.313 50.00000% : 13308.849us 00:07:42.313 75.00000% : 14720.394us 00:07:42.313 90.00000% : 16232.763us 00:07:42.313 95.00000% : 17543.483us 00:07:42.313 98.00000% : 18551.729us 00:07:42.313 99.00000% : 27424.295us 00:07:42.313 99.50000% : 32868.825us 00:07:42.313 99.90000% : 34482.018us 00:07:42.313 99.99000% : 34683.668us 00:07:42.313 99.99900% : 34683.668us 00:07:42.313 99.99990% : 34683.668us 00:07:42.313 99.99999% : 34683.668us 00:07:42.313 00:07:42.313 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:42.313 ================================================================================= 00:07:42.313 1.00000% : 9779.988us 00:07:42.313 10.00000% : 10889.058us 00:07:42.313 25.00000% : 11746.068us 00:07:42.313 50.00000% : 13308.849us 00:07:42.313 75.00000% : 14821.218us 00:07:42.313 90.00000% : 16131.938us 00:07:42.313 95.00000% : 17543.483us 00:07:42.313 98.00000% : 18450.905us 00:07:42.313 99.00000% : 25609.452us 00:07:42.313 99.50000% : 32263.877us 00:07:42.313 99.90000% : 33272.123us 00:07:42.313 99.99000% : 33473.772us 00:07:42.313 99.99900% : 33473.772us 00:07:42.313 99.99990% : 33473.772us 00:07:42.313 99.99999% : 33473.772us 00:07:42.313 00:07:42.313 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:42.313 ================================================================================= 00:07:42.313 1.00000% : 9628.751us 00:07:42.313 10.00000% : 10788.234us 00:07:42.313 25.00000% : 11746.068us 00:07:42.313 50.00000% : 13308.849us 00:07:42.313 75.00000% : 
14821.218us 00:07:42.313 90.00000% : 16232.763us 00:07:42.313 95.00000% : 17644.308us 00:07:42.313 98.00000% : 18854.203us 00:07:42.313 99.00000% : 24097.083us 00:07:42.313 99.50000% : 30852.332us 00:07:42.313 99.90000% : 31860.578us 00:07:42.313 99.99000% : 32263.877us 00:07:42.313 99.99900% : 32263.877us 00:07:42.313 99.99990% : 32263.877us 00:07:42.313 99.99999% : 32263.877us 00:07:42.313 00:07:42.313 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:42.313 ================================================================================= 00:07:42.313 1.00000% : 9830.400us 00:07:42.314 10.00000% : 10788.234us 00:07:42.314 25.00000% : 11746.068us 00:07:42.314 50.00000% : 13308.849us 00:07:42.314 75.00000% : 14922.043us 00:07:42.314 90.00000% : 16333.588us 00:07:42.314 95.00000% : 17341.834us 00:07:42.314 98.00000% : 18753.378us 00:07:42.314 99.00000% : 22584.714us 00:07:42.314 99.50000% : 29239.138us 00:07:42.314 99.90000% : 30045.735us 00:07:42.314 99.99000% : 30247.385us 00:07:42.314 99.99900% : 30247.385us 00:07:42.314 99.99990% : 30247.385us 00:07:42.314 99.99999% : 30247.385us 00:07:42.314 00:07:42.314 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:42.314 ================================================================================= 00:07:42.314 1.00000% : 9578.338us 00:07:42.314 10.00000% : 10838.646us 00:07:42.314 25.00000% : 11897.305us 00:07:42.314 50.00000% : 13208.025us 00:07:42.314 75.00000% : 14922.043us 00:07:42.314 90.00000% : 16434.412us 00:07:42.314 95.00000% : 17341.834us 00:07:42.314 98.00000% : 18753.378us 00:07:42.314 99.00000% : 21778.117us 00:07:42.314 99.50000% : 28029.243us 00:07:42.314 99.90000% : 29037.489us 00:07:42.314 99.99000% : 29239.138us 00:07:42.314 99.99900% : 29239.138us 00:07:42.314 99.99990% : 29239.138us 00:07:42.314 99.99999% : 29239.138us 00:07:42.314 00:07:42.314 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:42.314 ================================================================================= 00:07:42.314 1.00000% : 9679.163us 00:07:42.314 10.00000% : 10838.646us 00:07:42.314 25.00000% : 11897.305us 00:07:42.314 50.00000% : 13208.025us 00:07:42.314 75.00000% : 14821.218us 00:07:42.314 90.00000% : 16131.938us 00:07:42.314 95.00000% : 17039.360us 00:07:42.314 98.00000% : 18350.080us 00:07:42.314 99.00000% : 18955.028us 00:07:42.314 99.50000% : 21374.818us 00:07:42.314 99.90000% : 22282.240us 00:07:42.314 99.99000% : 22483.889us 00:07:42.314 99.99900% : 22483.889us 00:07:42.314 99.99990% : 22483.889us 00:07:42.314 99.99999% : 22483.889us 00:07:42.314 00:07:42.314 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:42.314 ============================================================================== 00:07:42.314 Range in us Cumulative IO count 00:07:42.314 9023.803 - 9074.215: 0.0106% ( 1) 00:07:42.314 9074.215 - 9124.628: 0.0211% ( 1) 00:07:42.314 9124.628 - 9175.040: 0.0317% ( 1) 00:07:42.314 9175.040 - 9225.452: 0.1161% ( 8) 00:07:42.314 9225.452 - 9275.865: 0.3062% ( 18) 00:07:42.314 9275.865 - 9326.277: 0.5384% ( 22) 00:07:42.314 9326.277 - 9376.689: 0.8657% ( 31) 00:07:42.314 9376.689 - 9427.102: 1.2352% ( 35) 00:07:42.314 9427.102 - 9477.514: 1.5519% ( 30) 00:07:42.314 9477.514 - 9527.926: 1.8053% ( 24) 00:07:42.314 9527.926 - 9578.338: 2.0270% ( 21) 00:07:42.314 9578.338 - 9628.751: 2.2910% ( 25) 00:07:42.314 9628.751 - 9679.163: 2.5866% ( 28) 00:07:42.314 9679.163 - 9729.575: 2.7555% ( 16) 00:07:42.314 9729.575 - 9779.988: 2.9455% ( 18) 00:07:42.314 
9779.988 - 9830.400: 3.0511% ( 10) 00:07:42.314 9830.400 - 9880.812: 3.1567% ( 10) 00:07:42.314 9880.812 - 9931.225: 3.2728% ( 11) 00:07:42.314 9931.225 - 9981.637: 3.4312% ( 15) 00:07:42.314 9981.637 - 10032.049: 3.5895% ( 15) 00:07:42.314 10032.049 - 10082.462: 3.7479% ( 15) 00:07:42.314 10082.462 - 10132.874: 3.9907% ( 23) 00:07:42.314 10132.874 - 10183.286: 4.1807% ( 18) 00:07:42.314 10183.286 - 10233.698: 4.3813% ( 19) 00:07:42.314 10233.698 - 10284.111: 4.6136% ( 22) 00:07:42.314 10284.111 - 10334.523: 4.8036% ( 18) 00:07:42.314 10334.523 - 10384.935: 5.1204% ( 30) 00:07:42.314 10384.935 - 10435.348: 5.3843% ( 25) 00:07:42.314 10435.348 - 10485.760: 5.5427% ( 15) 00:07:42.314 10485.760 - 10536.172: 5.7538% ( 20) 00:07:42.314 10536.172 - 10586.585: 6.1128% ( 34) 00:07:42.314 10586.585 - 10636.997: 6.6829% ( 54) 00:07:42.314 10636.997 - 10687.409: 7.0946% ( 39) 00:07:42.314 10687.409 - 10737.822: 7.5697% ( 45) 00:07:42.314 10737.822 - 10788.234: 7.9814% ( 39) 00:07:42.314 10788.234 - 10838.646: 8.8155% ( 79) 00:07:42.314 10838.646 - 10889.058: 9.3644% ( 52) 00:07:42.314 10889.058 - 10939.471: 10.0190% ( 62) 00:07:42.314 10939.471 - 10989.883: 11.1486% ( 107) 00:07:42.314 10989.883 - 11040.295: 12.0038% ( 81) 00:07:42.314 11040.295 - 11090.708: 12.9434% ( 89) 00:07:42.314 11090.708 - 11141.120: 13.6402% ( 66) 00:07:42.314 11141.120 - 11191.532: 14.3792% ( 70) 00:07:42.314 11191.532 - 11241.945: 15.3927% ( 96) 00:07:42.314 11241.945 - 11292.357: 16.2479% ( 81) 00:07:42.314 11292.357 - 11342.769: 17.3247% ( 102) 00:07:42.314 11342.769 - 11393.182: 18.2855% ( 91) 00:07:42.314 11393.182 - 11443.594: 19.1090% ( 78) 00:07:42.314 11443.594 - 11494.006: 19.7741% ( 63) 00:07:42.314 11494.006 - 11544.418: 20.4709% ( 66) 00:07:42.314 11544.418 - 11594.831: 21.2204% ( 71) 00:07:42.314 11594.831 - 11645.243: 22.3290% ( 105) 00:07:42.314 11645.243 - 11695.655: 23.1630% ( 79) 00:07:42.314 11695.655 - 11746.068: 23.9126% ( 71) 00:07:42.314 11746.068 - 11796.480: 24.8944% ( 93) 00:07:42.314 11796.480 - 11846.892: 25.7707% ( 83) 00:07:42.314 11846.892 - 11897.305: 26.4041% ( 60) 00:07:42.314 11897.305 - 11947.717: 26.9109% ( 48) 00:07:42.314 11947.717 - 11998.129: 27.6182% ( 67) 00:07:42.314 11998.129 - 12048.542: 28.0722% ( 43) 00:07:42.314 12048.542 - 12098.954: 28.5367% ( 44) 00:07:42.314 12098.954 - 12149.366: 29.2335% ( 66) 00:07:42.314 12149.366 - 12199.778: 29.9303% ( 66) 00:07:42.314 12199.778 - 12250.191: 30.6060% ( 64) 00:07:42.314 12250.191 - 12300.603: 31.5139% ( 86) 00:07:42.314 12300.603 - 12351.015: 32.4113% ( 85) 00:07:42.314 12351.015 - 12401.428: 33.2242% ( 77) 00:07:42.314 12401.428 - 12451.840: 33.9316% ( 67) 00:07:42.314 12451.840 - 12502.252: 34.9134% ( 93) 00:07:42.314 12502.252 - 12552.665: 35.7791% ( 82) 00:07:42.314 12552.665 - 12603.077: 36.8771% ( 104) 00:07:42.314 12603.077 - 12653.489: 37.7006% ( 78) 00:07:42.314 12653.489 - 12703.902: 38.5452% ( 80) 00:07:42.314 12703.902 - 12754.314: 39.4320% ( 84) 00:07:42.314 12754.314 - 12804.726: 40.2977% ( 82) 00:07:42.314 12804.726 - 12855.138: 41.3218% ( 97) 00:07:42.314 12855.138 - 12905.551: 42.2086% ( 84) 00:07:42.314 12905.551 - 13006.375: 44.3623% ( 204) 00:07:42.314 13006.375 - 13107.200: 46.5794% ( 210) 00:07:42.314 13107.200 - 13208.025: 48.8281% ( 213) 00:07:42.314 13208.025 - 13308.849: 50.8552% ( 192) 00:07:42.314 13308.849 - 13409.674: 53.3467% ( 236) 00:07:42.314 13409.674 - 13510.498: 55.4476% ( 199) 00:07:42.314 13510.498 - 13611.323: 57.5169% ( 196) 00:07:42.314 13611.323 - 13712.148: 59.4383% ( 182) 00:07:42.314 
13712.148 - 13812.972: 61.2120% ( 168) 00:07:42.314 13812.972 - 13913.797: 63.0173% ( 171) 00:07:42.314 13913.797 - 14014.622: 64.7171% ( 161) 00:07:42.314 14014.622 - 14115.446: 66.0684% ( 128) 00:07:42.314 14115.446 - 14216.271: 67.5992% ( 145) 00:07:42.314 14216.271 - 14317.095: 68.9295% ( 126) 00:07:42.314 14317.095 - 14417.920: 70.2914% ( 129) 00:07:42.314 14417.920 - 14518.745: 72.0756% ( 169) 00:07:42.314 14518.745 - 14619.569: 73.6803% ( 152) 00:07:42.314 14619.569 - 14720.394: 75.0000% ( 125) 00:07:42.314 14720.394 - 14821.218: 76.1930% ( 113) 00:07:42.314 14821.218 - 14922.043: 77.3860% ( 113) 00:07:42.314 14922.043 - 15022.868: 78.3045% ( 87) 00:07:42.314 15022.868 - 15123.692: 79.3074% ( 95) 00:07:42.314 15123.692 - 15224.517: 80.3315% ( 97) 00:07:42.314 15224.517 - 15325.342: 81.4189% ( 103) 00:07:42.314 15325.342 - 15426.166: 82.4430% ( 97) 00:07:42.314 15426.166 - 15526.991: 83.7838% ( 127) 00:07:42.314 15526.991 - 15627.815: 84.9029% ( 106) 00:07:42.314 15627.815 - 15728.640: 86.3176% ( 134) 00:07:42.314 15728.640 - 15829.465: 87.3522% ( 98) 00:07:42.314 15829.465 - 15930.289: 88.1334% ( 74) 00:07:42.314 15930.289 - 16031.114: 88.9464% ( 77) 00:07:42.314 16031.114 - 16131.938: 89.7171% ( 73) 00:07:42.314 16131.938 - 16232.763: 90.3505% ( 60) 00:07:42.314 16232.763 - 16333.588: 91.0895% ( 70) 00:07:42.314 16333.588 - 16434.412: 91.6068% ( 49) 00:07:42.314 16434.412 - 16535.237: 92.0714% ( 44) 00:07:42.314 16535.237 - 16636.062: 92.4937% ( 40) 00:07:42.314 16636.062 - 16736.886: 92.9160% ( 40) 00:07:42.314 16736.886 - 16837.711: 93.2221% ( 29) 00:07:42.314 16837.711 - 16938.535: 93.4966% ( 26) 00:07:42.314 16938.535 - 17039.360: 93.7183% ( 21) 00:07:42.314 17039.360 - 17140.185: 93.9189% ( 19) 00:07:42.314 17140.185 - 17241.009: 94.3307% ( 39) 00:07:42.314 17241.009 - 17341.834: 94.6685% ( 32) 00:07:42.314 17341.834 - 17442.658: 94.9747% ( 29) 00:07:42.314 17442.658 - 17543.483: 95.2386% ( 25) 00:07:42.314 17543.483 - 17644.308: 95.4392% ( 19) 00:07:42.314 17644.308 - 17745.132: 95.7559% ( 30) 00:07:42.314 17745.132 - 17845.957: 96.1677% ( 39) 00:07:42.314 17845.957 - 17946.782: 96.5688% ( 38) 00:07:42.314 17946.782 - 18047.606: 96.8433% ( 26) 00:07:42.314 18047.606 - 18148.431: 97.1284% ( 27) 00:07:42.314 18148.431 - 18249.255: 97.4240% ( 28) 00:07:42.314 18249.255 - 18350.080: 97.6879% ( 25) 00:07:42.314 18350.080 - 18450.905: 97.9096% ( 21) 00:07:42.314 18450.905 - 18551.729: 98.1524% ( 23) 00:07:42.314 18551.729 - 18652.554: 98.2897% ( 13) 00:07:42.314 18652.554 - 18753.378: 98.3847% ( 9) 00:07:42.314 18753.378 - 18854.203: 98.4903% ( 10) 00:07:42.314 18854.203 - 18955.028: 98.5642% ( 7) 00:07:42.314 18955.028 - 19055.852: 98.5959% ( 3) 00:07:42.314 19055.852 - 19156.677: 98.6170% ( 2) 00:07:42.314 19156.677 - 19257.502: 98.6486% ( 3) 00:07:42.314 26416.049 - 26617.698: 98.6592% ( 1) 00:07:42.314 26819.348 - 27020.997: 98.8070% ( 14) 00:07:42.314 27020.997 - 27222.646: 98.9337% ( 12) 00:07:42.315 27222.646 - 27424.295: 99.0182% ( 8) 00:07:42.315 27424.295 - 27625.945: 99.1026% ( 8) 00:07:42.315 27625.945 - 27827.594: 99.1871% ( 8) 00:07:42.315 27827.594 - 28029.243: 99.2715% ( 8) 00:07:42.315 28029.243 - 28230.892: 99.3243% ( 5) 00:07:42.315 32465.526 - 32667.175: 99.4932% ( 16) 00:07:42.315 32667.175 - 32868.825: 99.5671% ( 7) 00:07:42.315 33473.772 - 33675.422: 99.6094% ( 4) 00:07:42.315 33675.422 - 33877.071: 99.6938% ( 8) 00:07:42.315 33877.071 - 34078.720: 99.7783% ( 8) 00:07:42.315 34078.720 - 34280.369: 99.8628% ( 8) 00:07:42.315 34280.369 - 34482.018: 99.9472% ( 
8) 00:07:42.315 34482.018 - 34683.668: 100.0000% ( 5) 00:07:42.315 00:07:42.315 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:42.315 ============================================================================== 00:07:42.315 Range in us Cumulative IO count 00:07:42.315 8973.391 - 9023.803: 0.0211% ( 2) 00:07:42.315 9023.803 - 9074.215: 0.0739% ( 5) 00:07:42.315 9074.215 - 9124.628: 0.0950% ( 2) 00:07:42.315 9124.628 - 9175.040: 0.1478% ( 5) 00:07:42.315 9175.040 - 9225.452: 0.2323% ( 8) 00:07:42.315 9225.452 - 9275.865: 0.3167% ( 8) 00:07:42.315 9275.865 - 9326.277: 0.4645% ( 14) 00:07:42.315 9326.277 - 9376.689: 0.5279% ( 6) 00:07:42.315 9376.689 - 9427.102: 0.5912% ( 6) 00:07:42.315 9427.102 - 9477.514: 0.6229% ( 3) 00:07:42.315 9477.514 - 9527.926: 0.6651% ( 4) 00:07:42.315 9527.926 - 9578.338: 0.7073% ( 4) 00:07:42.315 9578.338 - 9628.751: 0.7707% ( 6) 00:07:42.315 9628.751 - 9679.163: 0.8552% ( 8) 00:07:42.315 9679.163 - 9729.575: 0.9713% ( 11) 00:07:42.315 9729.575 - 9779.988: 1.1296% ( 15) 00:07:42.315 9779.988 - 9830.400: 1.5203% ( 37) 00:07:42.315 9830.400 - 9880.812: 1.8476% ( 31) 00:07:42.315 9880.812 - 9931.225: 2.0904% ( 23) 00:07:42.315 9931.225 - 9981.637: 2.4177% ( 31) 00:07:42.315 9981.637 - 10032.049: 2.6816% ( 25) 00:07:42.315 10032.049 - 10082.462: 3.0194% ( 32) 00:07:42.315 10082.462 - 10132.874: 3.2095% ( 18) 00:07:42.315 10132.874 - 10183.286: 3.4101% ( 19) 00:07:42.315 10183.286 - 10233.698: 3.6740% ( 25) 00:07:42.315 10233.698 - 10284.111: 3.9907% ( 30) 00:07:42.315 10284.111 - 10334.523: 4.4764% ( 46) 00:07:42.315 10334.523 - 10384.935: 4.6769% ( 19) 00:07:42.315 10384.935 - 10435.348: 4.8564% ( 17) 00:07:42.315 10435.348 - 10485.760: 5.0781% ( 21) 00:07:42.315 10485.760 - 10536.172: 5.4899% ( 39) 00:07:42.315 10536.172 - 10586.585: 5.9122% ( 40) 00:07:42.315 10586.585 - 10636.997: 6.2606% ( 33) 00:07:42.315 10636.997 - 10687.409: 6.7568% ( 47) 00:07:42.315 10687.409 - 10737.822: 7.4113% ( 62) 00:07:42.315 10737.822 - 10788.234: 8.1398% ( 69) 00:07:42.315 10788.234 - 10838.646: 9.2061% ( 101) 00:07:42.315 10838.646 - 10889.058: 10.2090% ( 95) 00:07:42.315 10889.058 - 10939.471: 11.0747% ( 82) 00:07:42.315 10939.471 - 10989.883: 12.0671% ( 94) 00:07:42.315 10989.883 - 11040.295: 12.8906% ( 78) 00:07:42.315 11040.295 - 11090.708: 13.7035% ( 77) 00:07:42.315 11090.708 - 11141.120: 14.4426% ( 70) 00:07:42.315 11141.120 - 11191.532: 15.0549% ( 58) 00:07:42.315 11191.532 - 11241.945: 15.8361% ( 74) 00:07:42.315 11241.945 - 11292.357: 16.5963% ( 72) 00:07:42.315 11292.357 - 11342.769: 17.5042% ( 86) 00:07:42.315 11342.769 - 11393.182: 18.3910% ( 84) 00:07:42.315 11393.182 - 11443.594: 19.1406% ( 71) 00:07:42.315 11443.594 - 11494.006: 20.0802% ( 89) 00:07:42.315 11494.006 - 11544.418: 21.0093% ( 88) 00:07:42.315 11544.418 - 11594.831: 21.9278% ( 87) 00:07:42.315 11594.831 - 11645.243: 23.0046% ( 102) 00:07:42.315 11645.243 - 11695.655: 23.9126% ( 86) 00:07:42.315 11695.655 - 11746.068: 25.0211% ( 105) 00:07:42.315 11746.068 - 11796.480: 25.6757% ( 62) 00:07:42.315 11796.480 - 11846.892: 26.2352% ( 53) 00:07:42.315 11846.892 - 11897.305: 26.6997% ( 44) 00:07:42.315 11897.305 - 11947.717: 27.1854% ( 46) 00:07:42.315 11947.717 - 11998.129: 27.7344% ( 52) 00:07:42.315 11998.129 - 12048.542: 28.3361% ( 57) 00:07:42.315 12048.542 - 12098.954: 28.9485% ( 58) 00:07:42.315 12098.954 - 12149.366: 29.5819% ( 60) 00:07:42.315 12149.366 - 12199.778: 30.2259% ( 61) 00:07:42.315 12199.778 - 12250.191: 30.9122% ( 65) 00:07:42.315 12250.191 - 12300.603: 31.5139% ( 
57) 00:07:42.315 12300.603 - 12351.015: 32.3902% ( 83) 00:07:42.315 12351.015 - 12401.428: 33.2876% ( 85) 00:07:42.315 12401.428 - 12451.840: 34.1322% ( 80) 00:07:42.315 12451.840 - 12502.252: 34.9768% ( 80) 00:07:42.315 12502.252 - 12552.665: 36.0008% ( 97) 00:07:42.315 12552.665 - 12603.077: 37.0355% ( 98) 00:07:42.315 12603.077 - 12653.489: 37.8378% ( 76) 00:07:42.315 12653.489 - 12703.902: 38.6085% ( 73) 00:07:42.315 12703.902 - 12754.314: 39.4531% ( 80) 00:07:42.315 12754.314 - 12804.726: 40.3611% ( 86) 00:07:42.315 12804.726 - 12855.138: 41.5541% ( 113) 00:07:42.315 12855.138 - 12905.551: 42.5253% ( 92) 00:07:42.315 12905.551 - 13006.375: 44.4679% ( 184) 00:07:42.315 13006.375 - 13107.200: 46.7166% ( 213) 00:07:42.315 13107.200 - 13208.025: 48.8915% ( 206) 00:07:42.315 13208.025 - 13308.849: 51.4464% ( 242) 00:07:42.315 13308.849 - 13409.674: 53.2728% ( 173) 00:07:42.315 13409.674 - 13510.498: 55.3104% ( 193) 00:07:42.315 13510.498 - 13611.323: 57.4958% ( 207) 00:07:42.315 13611.323 - 13712.148: 59.3856% ( 179) 00:07:42.315 13712.148 - 13812.972: 60.7369% ( 128) 00:07:42.315 13812.972 - 13913.797: 62.1199% ( 131) 00:07:42.315 13913.797 - 14014.622: 63.8302% ( 162) 00:07:42.315 14014.622 - 14115.446: 65.3716% ( 146) 00:07:42.315 14115.446 - 14216.271: 66.8074% ( 136) 00:07:42.315 14216.271 - 14317.095: 68.1905% ( 131) 00:07:42.315 14317.095 - 14417.920: 70.2597% ( 196) 00:07:42.315 14417.920 - 14518.745: 71.9700% ( 162) 00:07:42.315 14518.745 - 14619.569: 73.5747% ( 152) 00:07:42.315 14619.569 - 14720.394: 74.9367% ( 129) 00:07:42.315 14720.394 - 14821.218: 76.4358% ( 142) 00:07:42.315 14821.218 - 14922.043: 77.7660% ( 126) 00:07:42.315 14922.043 - 15022.868: 79.0857% ( 125) 00:07:42.315 15022.868 - 15123.692: 80.1098% ( 97) 00:07:42.315 15123.692 - 15224.517: 81.0494% ( 89) 00:07:42.315 15224.517 - 15325.342: 82.0418% ( 94) 00:07:42.315 15325.342 - 15426.166: 82.8864% ( 80) 00:07:42.315 15426.166 - 15526.991: 83.7732% ( 84) 00:07:42.315 15526.991 - 15627.815: 84.6706% ( 85) 00:07:42.315 15627.815 - 15728.640: 85.6313% ( 91) 00:07:42.315 15728.640 - 15829.465: 86.8666% ( 117) 00:07:42.315 15829.465 - 15930.289: 87.9223% ( 100) 00:07:42.315 15930.289 - 16031.114: 89.1364% ( 115) 00:07:42.315 16031.114 - 16131.938: 90.0021% ( 82) 00:07:42.315 16131.938 - 16232.763: 90.6144% ( 58) 00:07:42.315 16232.763 - 16333.588: 91.2057% ( 56) 00:07:42.315 16333.588 - 16434.412: 91.8391% ( 60) 00:07:42.315 16434.412 - 16535.237: 92.2403% ( 38) 00:07:42.315 16535.237 - 16636.062: 92.6204% ( 36) 00:07:42.315 16636.062 - 16736.886: 92.9371% ( 30) 00:07:42.315 16736.886 - 16837.711: 93.2221% ( 27) 00:07:42.315 16837.711 - 16938.535: 93.4122% ( 18) 00:07:42.315 16938.535 - 17039.360: 93.6233% ( 20) 00:07:42.315 17039.360 - 17140.185: 93.8872% ( 25) 00:07:42.315 17140.185 - 17241.009: 94.1301% ( 23) 00:07:42.315 17241.009 - 17341.834: 94.4257% ( 28) 00:07:42.315 17341.834 - 17442.658: 94.8057% ( 36) 00:07:42.315 17442.658 - 17543.483: 95.1858% ( 36) 00:07:42.315 17543.483 - 17644.308: 95.5131% ( 31) 00:07:42.315 17644.308 - 17745.132: 95.9565% ( 42) 00:07:42.315 17745.132 - 17845.957: 96.3260% ( 35) 00:07:42.315 17845.957 - 17946.782: 96.7272% ( 38) 00:07:42.315 17946.782 - 18047.606: 97.0545% ( 31) 00:07:42.315 18047.606 - 18148.431: 97.3501% ( 28) 00:07:42.315 18148.431 - 18249.255: 97.6562% ( 29) 00:07:42.315 18249.255 - 18350.080: 97.9096% ( 24) 00:07:42.315 18350.080 - 18450.905: 98.1419% ( 22) 00:07:42.315 18450.905 - 18551.729: 98.3742% ( 22) 00:07:42.315 18551.729 - 18652.554: 98.5008% ( 12) 
00:07:42.315 18652.554 - 18753.378: 98.5642% ( 6) 00:07:42.315 18753.378 - 18854.203: 98.6170% ( 5) 00:07:42.315 18854.203 - 18955.028: 98.6486% ( 3) 00:07:42.315 24601.206 - 24702.031: 98.6592% ( 1) 00:07:42.315 24702.031 - 24802.855: 98.7014% ( 4) 00:07:42.315 24802.855 - 24903.680: 98.7437% ( 4) 00:07:42.315 24903.680 - 25004.505: 98.7859% ( 4) 00:07:42.315 25004.505 - 25105.329: 98.8281% ( 4) 00:07:42.315 25105.329 - 25206.154: 98.8704% ( 4) 00:07:42.315 25206.154 - 25306.978: 98.9126% ( 4) 00:07:42.315 25306.978 - 25407.803: 98.9548% ( 4) 00:07:42.315 25407.803 - 25508.628: 98.9970% ( 4) 00:07:42.315 25508.628 - 25609.452: 99.0393% ( 4) 00:07:42.315 25609.452 - 25710.277: 99.0815% ( 4) 00:07:42.315 25710.277 - 25811.102: 99.1237% ( 4) 00:07:42.315 25811.102 - 26012.751: 99.2082% ( 8) 00:07:42.315 26012.751 - 26214.400: 99.3032% ( 9) 00:07:42.315 26214.400 - 26416.049: 99.3243% ( 2) 00:07:42.315 31658.929 - 31860.578: 99.3771% ( 5) 00:07:42.315 31860.578 - 32062.228: 99.4721% ( 9) 00:07:42.315 32062.228 - 32263.877: 99.5460% ( 7) 00:07:42.315 32263.877 - 32465.526: 99.6305% ( 8) 00:07:42.315 32465.526 - 32667.175: 99.7149% ( 8) 00:07:42.315 32667.175 - 32868.825: 99.7994% ( 8) 00:07:42.315 32868.825 - 33070.474: 99.8839% ( 8) 00:07:42.315 33070.474 - 33272.123: 99.9683% ( 8) 00:07:42.315 33272.123 - 33473.772: 100.0000% ( 3) 00:07:42.315 00:07:42.315 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:42.315 ============================================================================== 00:07:42.315 Range in us Cumulative IO count 00:07:42.315 8872.566 - 8922.978: 0.0106% ( 1) 00:07:42.315 8922.978 - 8973.391: 0.0528% ( 4) 00:07:42.315 8973.391 - 9023.803: 0.0950% ( 4) 00:07:42.315 9023.803 - 9074.215: 0.2006% ( 10) 00:07:42.315 9074.215 - 9124.628: 0.2534% ( 5) 00:07:42.315 9124.628 - 9175.040: 0.2851% ( 3) 00:07:42.316 9175.040 - 9225.452: 0.3484% ( 6) 00:07:42.316 9225.452 - 9275.865: 0.3906% ( 4) 00:07:42.316 9275.865 - 9326.277: 0.4434% ( 5) 00:07:42.316 9326.277 - 9376.689: 0.5384% ( 9) 00:07:42.316 9376.689 - 9427.102: 0.5912% ( 5) 00:07:42.316 9427.102 - 9477.514: 0.7390% ( 14) 00:07:42.316 9477.514 - 9527.926: 0.8340% ( 9) 00:07:42.316 9527.926 - 9578.338: 0.8974% ( 6) 00:07:42.316 9578.338 - 9628.751: 1.0030% ( 10) 00:07:42.316 9628.751 - 9679.163: 1.1085% ( 10) 00:07:42.316 9679.163 - 9729.575: 1.3091% ( 19) 00:07:42.316 9729.575 - 9779.988: 1.5097% ( 19) 00:07:42.316 9779.988 - 9830.400: 1.7420% ( 22) 00:07:42.316 9830.400 - 9880.812: 2.0798% ( 32) 00:07:42.316 9880.812 - 9931.225: 2.4071% ( 31) 00:07:42.316 9931.225 - 9981.637: 2.5866% ( 17) 00:07:42.316 9981.637 - 10032.049: 2.8822% ( 28) 00:07:42.316 10032.049 - 10082.462: 3.1039% ( 21) 00:07:42.316 10082.462 - 10132.874: 3.3045% ( 19) 00:07:42.316 10132.874 - 10183.286: 3.5156% ( 20) 00:07:42.316 10183.286 - 10233.698: 3.8007% ( 27) 00:07:42.316 10233.698 - 10284.111: 4.0857% ( 27) 00:07:42.316 10284.111 - 10334.523: 4.3708% ( 27) 00:07:42.316 10334.523 - 10384.935: 4.7086% ( 32) 00:07:42.316 10384.935 - 10435.348: 5.0570% ( 33) 00:07:42.316 10435.348 - 10485.760: 5.5427% ( 46) 00:07:42.316 10485.760 - 10536.172: 6.1972% ( 62) 00:07:42.316 10536.172 - 10586.585: 6.8412% ( 61) 00:07:42.316 10586.585 - 10636.997: 7.5380% ( 66) 00:07:42.316 10636.997 - 10687.409: 8.4354% ( 85) 00:07:42.316 10687.409 - 10737.822: 9.3117% ( 83) 00:07:42.316 10737.822 - 10788.234: 10.3041% ( 94) 00:07:42.316 10788.234 - 10838.646: 11.0431% ( 70) 00:07:42.316 10838.646 - 10889.058: 11.6660% ( 59) 00:07:42.316 10889.058 - 
10939.471: 12.1094% ( 42) 00:07:42.316 10939.471 - 10989.883: 12.5845% ( 45) 00:07:42.316 10989.883 - 11040.295: 12.9434% ( 34) 00:07:42.316 11040.295 - 11090.708: 13.6296% ( 65) 00:07:42.316 11090.708 - 11141.120: 14.2736% ( 61) 00:07:42.316 11141.120 - 11191.532: 15.0127% ( 70) 00:07:42.316 11191.532 - 11241.945: 15.9734% ( 91) 00:07:42.316 11241.945 - 11292.357: 16.8813% ( 86) 00:07:42.316 11292.357 - 11342.769: 17.9688% ( 103) 00:07:42.316 11342.769 - 11393.182: 18.9295% ( 91) 00:07:42.316 11393.182 - 11443.594: 19.8902% ( 91) 00:07:42.316 11443.594 - 11494.006: 20.9143% ( 97) 00:07:42.316 11494.006 - 11544.418: 22.0228% ( 105) 00:07:42.316 11544.418 - 11594.831: 23.0785% ( 100) 00:07:42.316 11594.831 - 11645.243: 23.9970% ( 87) 00:07:42.316 11645.243 - 11695.655: 24.6833% ( 65) 00:07:42.316 11695.655 - 11746.068: 25.3906% ( 67) 00:07:42.316 11746.068 - 11796.480: 26.0874% ( 66) 00:07:42.316 11796.480 - 11846.892: 26.6786% ( 56) 00:07:42.316 11846.892 - 11897.305: 27.4071% ( 69) 00:07:42.316 11897.305 - 11947.717: 28.1567% ( 71) 00:07:42.316 11947.717 - 11998.129: 28.7479% ( 56) 00:07:42.316 11998.129 - 12048.542: 29.4447% ( 66) 00:07:42.316 12048.542 - 12098.954: 30.1520% ( 67) 00:07:42.316 12098.954 - 12149.366: 30.7749% ( 59) 00:07:42.316 12149.366 - 12199.778: 31.3556% ( 55) 00:07:42.316 12199.778 - 12250.191: 31.9362% ( 55) 00:07:42.316 12250.191 - 12300.603: 32.4641% ( 50) 00:07:42.316 12300.603 - 12351.015: 33.1609% ( 66) 00:07:42.316 12351.015 - 12401.428: 33.8260% ( 63) 00:07:42.316 12401.428 - 12451.840: 34.4278% ( 57) 00:07:42.316 12451.840 - 12502.252: 34.9557% ( 50) 00:07:42.316 12502.252 - 12552.665: 35.5997% ( 61) 00:07:42.316 12552.665 - 12603.077: 36.5921% ( 94) 00:07:42.316 12603.077 - 12653.489: 37.3628% ( 73) 00:07:42.316 12653.489 - 12703.902: 38.1440% ( 74) 00:07:42.316 12703.902 - 12754.314: 38.8408% ( 66) 00:07:42.316 12754.314 - 12804.726: 39.7487% ( 86) 00:07:42.316 12804.726 - 12855.138: 41.0473% ( 123) 00:07:42.316 12855.138 - 12905.551: 42.2508% ( 114) 00:07:42.316 12905.551 - 13006.375: 44.5101% ( 214) 00:07:42.316 13006.375 - 13107.200: 46.8117% ( 218) 00:07:42.316 13107.200 - 13208.025: 49.0604% ( 213) 00:07:42.316 13208.025 - 13308.849: 51.5308% ( 234) 00:07:42.316 13308.849 - 13409.674: 53.7373% ( 209) 00:07:42.316 13409.674 - 13510.498: 55.9649% ( 211) 00:07:42.316 13510.498 - 13611.323: 57.9709% ( 190) 00:07:42.316 13611.323 - 13712.148: 60.0084% ( 193) 00:07:42.316 13712.148 - 13812.972: 61.6448% ( 155) 00:07:42.316 13812.972 - 13913.797: 63.0490% ( 133) 00:07:42.316 13913.797 - 14014.622: 64.4637% ( 134) 00:07:42.316 14014.622 - 14115.446: 66.2584% ( 170) 00:07:42.316 14115.446 - 14216.271: 67.5465% ( 122) 00:07:42.316 14216.271 - 14317.095: 69.0667% ( 144) 00:07:42.316 14317.095 - 14417.920: 70.2808% ( 115) 00:07:42.316 14417.920 - 14518.745: 71.7378% ( 138) 00:07:42.316 14518.745 - 14619.569: 72.9413% ( 114) 00:07:42.316 14619.569 - 14720.394: 74.2610% ( 125) 00:07:42.316 14720.394 - 14821.218: 75.4962% ( 117) 00:07:42.316 14821.218 - 14922.043: 76.7314% ( 117) 00:07:42.316 14922.043 - 15022.868: 77.9772% ( 118) 00:07:42.316 15022.868 - 15123.692: 79.0329% ( 100) 00:07:42.316 15123.692 - 15224.517: 80.3421% ( 124) 00:07:42.316 15224.517 - 15325.342: 81.5139% ( 111) 00:07:42.316 15325.342 - 15426.166: 82.6541% ( 108) 00:07:42.316 15426.166 - 15526.991: 83.6993% ( 99) 00:07:42.316 15526.991 - 15627.815: 84.6073% ( 86) 00:07:42.316 15627.815 - 15728.640: 85.5680% ( 91) 00:07:42.316 15728.640 - 15829.465: 86.6026% ( 98) 00:07:42.316 15829.465 - 
15930.289: 87.4578% ( 81) 00:07:42.316 15930.289 - 16031.114: 88.5557% ( 104) 00:07:42.316 16031.114 - 16131.938: 89.5165% ( 91) 00:07:42.316 16131.938 - 16232.763: 90.2660% ( 71) 00:07:42.316 16232.763 - 16333.588: 90.8995% ( 60) 00:07:42.316 16333.588 - 16434.412: 91.3640% ( 44) 00:07:42.316 16434.412 - 16535.237: 91.8391% ( 45) 00:07:42.316 16535.237 - 16636.062: 92.2297% ( 37) 00:07:42.316 16636.062 - 16736.886: 92.5465% ( 30) 00:07:42.316 16736.886 - 16837.711: 92.7787% ( 22) 00:07:42.316 16837.711 - 16938.535: 93.1271% ( 33) 00:07:42.316 16938.535 - 17039.360: 93.4333% ( 29) 00:07:42.316 17039.360 - 17140.185: 93.7078% ( 26) 00:07:42.316 17140.185 - 17241.009: 94.0351% ( 31) 00:07:42.316 17241.009 - 17341.834: 94.4679% ( 41) 00:07:42.316 17341.834 - 17442.658: 94.7530% ( 27) 00:07:42.316 17442.658 - 17543.483: 94.9535% ( 19) 00:07:42.316 17543.483 - 17644.308: 95.1964% ( 23) 00:07:42.316 17644.308 - 17745.132: 95.5236% ( 31) 00:07:42.316 17745.132 - 17845.957: 95.7242% ( 19) 00:07:42.316 17845.957 - 17946.782: 96.0198% ( 28) 00:07:42.316 17946.782 - 18047.606: 96.2838% ( 25) 00:07:42.316 18047.606 - 18148.431: 96.4844% ( 19) 00:07:42.316 18148.431 - 18249.255: 96.8117% ( 31) 00:07:42.316 18249.255 - 18350.080: 97.1495% ( 32) 00:07:42.316 18350.080 - 18450.905: 97.3079% ( 15) 00:07:42.316 18450.905 - 18551.729: 97.4873% ( 17) 00:07:42.316 18551.729 - 18652.554: 97.6668% ( 17) 00:07:42.316 18652.554 - 18753.378: 97.8885% ( 21) 00:07:42.316 18753.378 - 18854.203: 98.0785% ( 18) 00:07:42.316 18854.203 - 18955.028: 98.2158% ( 13) 00:07:42.316 18955.028 - 19055.852: 98.3953% ( 17) 00:07:42.316 19055.852 - 19156.677: 98.5325% ( 13) 00:07:42.316 19156.677 - 19257.502: 98.6275% ( 9) 00:07:42.316 19257.502 - 19358.326: 98.6486% ( 2) 00:07:42.316 23088.837 - 23189.662: 98.6803% ( 3) 00:07:42.316 23189.662 - 23290.486: 98.7120% ( 3) 00:07:42.316 23290.486 - 23391.311: 98.7648% ( 5) 00:07:42.316 23391.311 - 23492.135: 98.8070% ( 4) 00:07:42.316 23492.135 - 23592.960: 98.8387% ( 3) 00:07:42.316 23592.960 - 23693.785: 98.8704% ( 3) 00:07:42.316 23693.785 - 23794.609: 98.9126% ( 4) 00:07:42.316 23794.609 - 23895.434: 98.9548% ( 4) 00:07:42.316 23895.434 - 23996.258: 98.9970% ( 4) 00:07:42.316 23996.258 - 24097.083: 99.0393% ( 4) 00:07:42.316 24097.083 - 24197.908: 99.0709% ( 3) 00:07:42.316 24197.908 - 24298.732: 99.1132% ( 4) 00:07:42.316 24298.732 - 24399.557: 99.1554% ( 4) 00:07:42.316 24399.557 - 24500.382: 99.1871% ( 3) 00:07:42.316 24500.382 - 24601.206: 99.2293% ( 4) 00:07:42.316 24601.206 - 24702.031: 99.2715% ( 4) 00:07:42.316 24702.031 - 24802.855: 99.3138% ( 4) 00:07:42.316 24802.855 - 24903.680: 99.3243% ( 1) 00:07:42.316 30045.735 - 30247.385: 99.3349% ( 1) 00:07:42.316 30247.385 - 30449.034: 99.3982% ( 6) 00:07:42.316 30449.034 - 30650.683: 99.4827% ( 8) 00:07:42.316 30650.683 - 30852.332: 99.5460% ( 6) 00:07:42.316 30852.332 - 31053.982: 99.6094% ( 6) 00:07:42.316 31053.982 - 31255.631: 99.6833% ( 7) 00:07:42.316 31255.631 - 31457.280: 99.7572% ( 7) 00:07:42.316 31457.280 - 31658.929: 99.8311% ( 7) 00:07:42.316 31658.929 - 31860.578: 99.9155% ( 8) 00:07:42.316 31860.578 - 32062.228: 99.9894% ( 7) 00:07:42.316 32062.228 - 32263.877: 100.0000% ( 1) 00:07:42.316 00:07:42.316 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:42.316 ============================================================================== 00:07:42.316 Range in us Cumulative IO count 00:07:42.316 8922.978 - 8973.391: 0.0106% ( 1) 00:07:42.316 9023.803 - 9074.215: 0.0211% ( 1) 00:07:42.316 9175.040 - 
9225.452: 0.0633% ( 4) 00:07:42.316 9225.452 - 9275.865: 0.1161% ( 5) 00:07:42.316 9275.865 - 9326.277: 0.2006% ( 8) 00:07:42.316 9326.277 - 9376.689: 0.2745% ( 7) 00:07:42.316 9376.689 - 9427.102: 0.4117% ( 13) 00:07:42.316 9427.102 - 9477.514: 0.4751% ( 6) 00:07:42.316 9477.514 - 9527.926: 0.5490% ( 7) 00:07:42.316 9527.926 - 9578.338: 0.6123% ( 6) 00:07:42.316 9578.338 - 9628.751: 0.6757% ( 6) 00:07:42.316 9628.751 - 9679.163: 0.7390% ( 6) 00:07:42.316 9679.163 - 9729.575: 0.7918% ( 5) 00:07:42.316 9729.575 - 9779.988: 0.8763% ( 8) 00:07:42.316 9779.988 - 9830.400: 1.0030% ( 12) 00:07:42.316 9830.400 - 9880.812: 1.2563% ( 24) 00:07:42.317 9880.812 - 9931.225: 1.5308% ( 26) 00:07:42.317 9931.225 - 9981.637: 1.8264% ( 28) 00:07:42.317 9981.637 - 10032.049: 2.0481% ( 21) 00:07:42.317 10032.049 - 10082.462: 2.3015% ( 24) 00:07:42.317 10082.462 - 10132.874: 2.5866% ( 27) 00:07:42.317 10132.874 - 10183.286: 2.9561% ( 35) 00:07:42.317 10183.286 - 10233.698: 3.3045% ( 33) 00:07:42.317 10233.698 - 10284.111: 3.7796% ( 45) 00:07:42.317 10284.111 - 10334.523: 4.2019% ( 40) 00:07:42.317 10334.523 - 10384.935: 4.6453% ( 42) 00:07:42.317 10384.935 - 10435.348: 5.1837% ( 51) 00:07:42.317 10435.348 - 10485.760: 5.8171% ( 60) 00:07:42.317 10485.760 - 10536.172: 6.5773% ( 72) 00:07:42.317 10536.172 - 10586.585: 7.3163% ( 70) 00:07:42.317 10586.585 - 10636.997: 7.9814% ( 63) 00:07:42.317 10636.997 - 10687.409: 8.9316% ( 90) 00:07:42.317 10687.409 - 10737.822: 9.7551% ( 78) 00:07:42.317 10737.822 - 10788.234: 10.6841% ( 88) 00:07:42.317 10788.234 - 10838.646: 11.6871% ( 95) 00:07:42.317 10838.646 - 10889.058: 12.4367% ( 71) 00:07:42.317 10889.058 - 10939.471: 13.1018% ( 63) 00:07:42.317 10939.471 - 10989.883: 13.6719% ( 54) 00:07:42.317 10989.883 - 11040.295: 14.3264% ( 62) 00:07:42.317 11040.295 - 11090.708: 15.0338% ( 67) 00:07:42.317 11090.708 - 11141.120: 15.7306% ( 66) 00:07:42.317 11141.120 - 11191.532: 16.4802% ( 71) 00:07:42.317 11191.532 - 11241.945: 17.0714% ( 56) 00:07:42.317 11241.945 - 11292.357: 17.7259% ( 62) 00:07:42.317 11292.357 - 11342.769: 18.6972% ( 92) 00:07:42.317 11342.769 - 11393.182: 19.4046% ( 67) 00:07:42.317 11393.182 - 11443.594: 20.0486% ( 61) 00:07:42.317 11443.594 - 11494.006: 20.6609% ( 58) 00:07:42.317 11494.006 - 11544.418: 21.3366% ( 64) 00:07:42.317 11544.418 - 11594.831: 22.2445% ( 86) 00:07:42.317 11594.831 - 11645.243: 23.2475% ( 95) 00:07:42.317 11645.243 - 11695.655: 24.1132% ( 82) 00:07:42.317 11695.655 - 11746.068: 25.3695% ( 119) 00:07:42.317 11746.068 - 11796.480: 26.2880% ( 87) 00:07:42.317 11796.480 - 11846.892: 27.0587% ( 73) 00:07:42.317 11846.892 - 11897.305: 27.8822% ( 78) 00:07:42.317 11897.305 - 11947.717: 28.5051% ( 59) 00:07:42.317 11947.717 - 11998.129: 29.0435% ( 51) 00:07:42.317 11998.129 - 12048.542: 29.7192% ( 64) 00:07:42.317 12048.542 - 12098.954: 30.4054% ( 65) 00:07:42.317 12098.954 - 12149.366: 31.1339% ( 69) 00:07:42.317 12149.366 - 12199.778: 31.8307% ( 66) 00:07:42.317 12199.778 - 12250.191: 32.4535% ( 59) 00:07:42.317 12250.191 - 12300.603: 33.2665% ( 77) 00:07:42.317 12300.603 - 12351.015: 33.9210% ( 62) 00:07:42.317 12351.015 - 12401.428: 34.7445% ( 78) 00:07:42.317 12401.428 - 12451.840: 35.3885% ( 61) 00:07:42.317 12451.840 - 12502.252: 35.9164% ( 50) 00:07:42.317 12502.252 - 12552.665: 36.7715% ( 81) 00:07:42.317 12552.665 - 12603.077: 37.3628% ( 56) 00:07:42.317 12603.077 - 12653.489: 37.9540% ( 56) 00:07:42.317 12653.489 - 12703.902: 38.5874% ( 60) 00:07:42.317 12703.902 - 12754.314: 39.4637% ( 83) 00:07:42.317 12754.314 - 
12804.726: 40.3399% ( 83) 00:07:42.317 12804.726 - 12855.138: 41.5329% ( 113) 00:07:42.317 12855.138 - 12905.551: 42.6098% ( 102) 00:07:42.317 12905.551 - 13006.375: 44.5312% ( 182) 00:07:42.317 13006.375 - 13107.200: 46.7905% ( 214) 00:07:42.317 13107.200 - 13208.025: 49.1026% ( 219) 00:07:42.317 13208.025 - 13308.849: 51.2035% ( 199) 00:07:42.317 13308.849 - 13409.674: 53.2095% ( 190) 00:07:42.317 13409.674 - 13510.498: 55.5004% ( 217) 00:07:42.317 13510.498 - 13611.323: 57.7914% ( 217) 00:07:42.317 13611.323 - 13712.148: 59.9029% ( 200) 00:07:42.317 13712.148 - 13812.972: 61.5182% ( 153) 00:07:42.317 13812.972 - 13913.797: 63.1546% ( 155) 00:07:42.317 13913.797 - 14014.622: 64.4320% ( 121) 00:07:42.317 14014.622 - 14115.446: 65.6672% ( 117) 00:07:42.317 14115.446 - 14216.271: 66.9447% ( 121) 00:07:42.317 14216.271 - 14317.095: 68.4544% ( 143) 00:07:42.317 14317.095 - 14417.920: 69.9008% ( 137) 00:07:42.317 14417.920 - 14518.745: 71.5160% ( 153) 00:07:42.317 14518.745 - 14619.569: 72.6562% ( 108) 00:07:42.317 14619.569 - 14720.394: 73.6803% ( 97) 00:07:42.317 14720.394 - 14821.218: 74.8205% ( 108) 00:07:42.317 14821.218 - 14922.043: 76.1191% ( 123) 00:07:42.317 14922.043 - 15022.868: 77.5232% ( 133) 00:07:42.317 15022.868 - 15123.692: 78.9168% ( 132) 00:07:42.317 15123.692 - 15224.517: 79.9831% ( 101) 00:07:42.317 15224.517 - 15325.342: 80.9755% ( 94) 00:07:42.317 15325.342 - 15426.166: 82.0840% ( 105) 00:07:42.317 15426.166 - 15526.991: 83.1609% ( 102) 00:07:42.317 15526.991 - 15627.815: 84.2589% ( 104) 00:07:42.317 15627.815 - 15728.640: 85.2829% ( 97) 00:07:42.317 15728.640 - 15829.465: 86.2965% ( 96) 00:07:42.317 15829.465 - 15930.289: 87.2572% ( 91) 00:07:42.317 15930.289 - 16031.114: 88.1862% ( 88) 00:07:42.317 16031.114 - 16131.938: 88.8725% ( 65) 00:07:42.317 16131.938 - 16232.763: 89.5270% ( 62) 00:07:42.317 16232.763 - 16333.588: 90.0127% ( 46) 00:07:42.317 16333.588 - 16434.412: 90.5828% ( 54) 00:07:42.317 16434.412 - 16535.237: 91.1845% ( 57) 00:07:42.317 16535.237 - 16636.062: 91.7652% ( 55) 00:07:42.317 16636.062 - 16736.886: 92.4092% ( 61) 00:07:42.317 16736.886 - 16837.711: 92.9160% ( 48) 00:07:42.317 16837.711 - 16938.535: 93.4966% ( 55) 00:07:42.317 16938.535 - 17039.360: 93.9823% ( 46) 00:07:42.317 17039.360 - 17140.185: 94.4046% ( 40) 00:07:42.317 17140.185 - 17241.009: 94.8057% ( 38) 00:07:42.317 17241.009 - 17341.834: 95.0486% ( 23) 00:07:42.317 17341.834 - 17442.658: 95.2492% ( 19) 00:07:42.317 17442.658 - 17543.483: 95.4497% ( 19) 00:07:42.317 17543.483 - 17644.308: 95.6081% ( 15) 00:07:42.317 17644.308 - 17745.132: 95.7348% ( 12) 00:07:42.317 17745.132 - 17845.957: 95.8404% ( 10) 00:07:42.317 17845.957 - 17946.782: 96.0621% ( 21) 00:07:42.317 17946.782 - 18047.606: 96.2627% ( 19) 00:07:42.317 18047.606 - 18148.431: 96.5266% ( 25) 00:07:42.317 18148.431 - 18249.255: 96.7694% ( 23) 00:07:42.317 18249.255 - 18350.080: 97.0017% ( 22) 00:07:42.317 18350.080 - 18450.905: 97.2656% ( 25) 00:07:42.317 18450.905 - 18551.729: 97.5190% ( 24) 00:07:42.317 18551.729 - 18652.554: 97.8252% ( 29) 00:07:42.317 18652.554 - 18753.378: 98.0574% ( 22) 00:07:42.317 18753.378 - 18854.203: 98.3425% ( 27) 00:07:42.317 18854.203 - 18955.028: 98.5008% ( 15) 00:07:42.317 18955.028 - 19055.852: 98.6064% ( 10) 00:07:42.317 19055.852 - 19156.677: 98.6486% ( 4) 00:07:42.317 21576.468 - 21677.292: 98.6592% ( 1) 00:07:42.317 21677.292 - 21778.117: 98.6909% ( 3) 00:07:42.317 21778.117 - 21878.942: 98.7331% ( 4) 00:07:42.317 21878.942 - 21979.766: 98.7753% ( 4) 00:07:42.317 21979.766 - 22080.591: 
98.8176% ( 4) 00:07:42.317 22080.591 - 22181.415: 98.8598% ( 4) 00:07:42.317 22181.415 - 22282.240: 98.9020% ( 4) 00:07:42.317 22282.240 - 22383.065: 98.9443% ( 4) 00:07:42.317 22383.065 - 22483.889: 98.9865% ( 4) 00:07:42.317 22483.889 - 22584.714: 99.0393% ( 5) 00:07:42.317 22584.714 - 22685.538: 99.0815% ( 4) 00:07:42.317 22685.538 - 22786.363: 99.1237% ( 4) 00:07:42.317 22786.363 - 22887.188: 99.1660% ( 4) 00:07:42.317 22887.188 - 22988.012: 99.2082% ( 4) 00:07:42.317 22988.012 - 23088.837: 99.2504% ( 4) 00:07:42.317 23088.837 - 23189.662: 99.2927% ( 4) 00:07:42.317 23189.662 - 23290.486: 99.3243% ( 3) 00:07:42.317 28634.191 - 28835.840: 99.4088% ( 8) 00:07:42.317 28835.840 - 29037.489: 99.4932% ( 8) 00:07:42.317 29037.489 - 29239.138: 99.5777% ( 8) 00:07:42.317 29239.138 - 29440.788: 99.6622% ( 8) 00:07:42.317 29440.788 - 29642.437: 99.7572% ( 9) 00:07:42.317 29642.437 - 29844.086: 99.8416% ( 8) 00:07:42.317 29844.086 - 30045.735: 99.9261% ( 8) 00:07:42.317 30045.735 - 30247.385: 100.0000% ( 7) 00:07:42.317 00:07:42.317 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:42.317 ============================================================================== 00:07:42.317 Range in us Cumulative IO count 00:07:42.317 8822.154 - 8872.566: 0.0211% ( 2) 00:07:42.317 8872.566 - 8922.978: 0.0528% ( 3) 00:07:42.317 8922.978 - 8973.391: 0.0950% ( 4) 00:07:42.317 8973.391 - 9023.803: 0.1372% ( 4) 00:07:42.317 9023.803 - 9074.215: 0.1900% ( 5) 00:07:42.317 9074.215 - 9124.628: 0.2323% ( 4) 00:07:42.317 9124.628 - 9175.040: 0.4117% ( 17) 00:07:42.317 9175.040 - 9225.452: 0.4645% ( 5) 00:07:42.317 9225.452 - 9275.865: 0.4856% ( 2) 00:07:42.317 9275.865 - 9326.277: 0.5490% ( 6) 00:07:42.317 9326.277 - 9376.689: 0.6334% ( 8) 00:07:42.317 9376.689 - 9427.102: 0.7285% ( 9) 00:07:42.317 9427.102 - 9477.514: 0.8129% ( 8) 00:07:42.317 9477.514 - 9527.926: 0.9185% ( 10) 00:07:42.317 9527.926 - 9578.338: 1.1085% ( 18) 00:07:42.317 9578.338 - 9628.751: 1.3936% ( 27) 00:07:42.317 9628.751 - 9679.163: 1.7420% ( 33) 00:07:42.317 9679.163 - 9729.575: 2.0904% ( 33) 00:07:42.317 9729.575 - 9779.988: 2.4599% ( 35) 00:07:42.317 9779.988 - 9830.400: 2.7766% ( 30) 00:07:42.317 9830.400 - 9880.812: 2.9772% ( 19) 00:07:42.317 9880.812 - 9931.225: 3.3256% ( 33) 00:07:42.317 9931.225 - 9981.637: 3.6106% ( 27) 00:07:42.317 9981.637 - 10032.049: 3.7162% ( 10) 00:07:42.317 10032.049 - 10082.462: 3.8112% ( 9) 00:07:42.317 10082.462 - 10132.874: 3.9485% ( 13) 00:07:42.317 10132.874 - 10183.286: 4.1068% ( 15) 00:07:42.317 10183.286 - 10233.698: 4.2758% ( 16) 00:07:42.317 10233.698 - 10284.111: 4.4236% ( 14) 00:07:42.317 10284.111 - 10334.523: 4.5608% ( 13) 00:07:42.317 10334.523 - 10384.935: 4.7825% ( 21) 00:07:42.317 10384.935 - 10435.348: 5.0253% ( 23) 00:07:42.317 10435.348 - 10485.760: 5.5532% ( 50) 00:07:42.317 10485.760 - 10536.172: 6.0600% ( 48) 00:07:42.317 10536.172 - 10586.585: 6.6301% ( 54) 00:07:42.317 10586.585 - 10636.997: 7.2846% ( 62) 00:07:42.318 10636.997 - 10687.409: 8.2137% ( 88) 00:07:42.318 10687.409 - 10737.822: 9.0477% ( 79) 00:07:42.318 10737.822 - 10788.234: 9.9345% ( 84) 00:07:42.318 10788.234 - 10838.646: 10.8003% ( 82) 00:07:42.318 10838.646 - 10889.058: 11.6976% ( 85) 00:07:42.318 10889.058 - 10939.471: 12.4578% ( 72) 00:07:42.318 10939.471 - 10989.883: 13.3446% ( 84) 00:07:42.318 10989.883 - 11040.295: 14.1364% ( 75) 00:07:42.318 11040.295 - 11090.708: 15.0549% ( 87) 00:07:42.318 11090.708 - 11141.120: 15.7939% ( 70) 00:07:42.318 11141.120 - 11191.532: 16.6385% ( 80) 00:07:42.318 
11191.532 - 11241.945: 17.4726% ( 79) 00:07:42.318 11241.945 - 11292.357: 18.0743% ( 57) 00:07:42.318 11292.357 - 11342.769: 18.6022% ( 50) 00:07:42.318 11342.769 - 11393.182: 19.0456% ( 42) 00:07:42.318 11393.182 - 11443.594: 19.3940% ( 33) 00:07:42.318 11443.594 - 11494.006: 19.8163% ( 40) 00:07:42.318 11494.006 - 11544.418: 20.3019% ( 46) 00:07:42.318 11544.418 - 11594.831: 20.8087% ( 48) 00:07:42.318 11594.831 - 11645.243: 21.5794% ( 73) 00:07:42.318 11645.243 - 11695.655: 22.3606% ( 74) 00:07:42.318 11695.655 - 11746.068: 22.9835% ( 59) 00:07:42.318 11746.068 - 11796.480: 23.7859% ( 76) 00:07:42.318 11796.480 - 11846.892: 24.4827% ( 66) 00:07:42.318 11846.892 - 11897.305: 25.3695% ( 84) 00:07:42.318 11897.305 - 11947.717: 26.3830% ( 96) 00:07:42.318 11947.717 - 11998.129: 27.2276% ( 80) 00:07:42.318 11998.129 - 12048.542: 28.1250% ( 85) 00:07:42.318 12048.542 - 12098.954: 28.9168% ( 75) 00:07:42.318 12098.954 - 12149.366: 29.7825% ( 82) 00:07:42.318 12149.366 - 12199.778: 30.5321% ( 71) 00:07:42.318 12199.778 - 12250.191: 31.3661% ( 79) 00:07:42.318 12250.191 - 12300.603: 32.3163% ( 90) 00:07:42.318 12300.603 - 12351.015: 33.2876% ( 92) 00:07:42.318 12351.015 - 12401.428: 34.4278% ( 108) 00:07:42.318 12401.428 - 12451.840: 35.6736% ( 118) 00:07:42.318 12451.840 - 12502.252: 36.7821% ( 105) 00:07:42.318 12502.252 - 12552.665: 37.7639% ( 93) 00:07:42.318 12552.665 - 12603.077: 38.7774% ( 96) 00:07:42.318 12603.077 - 12653.489: 40.0021% ( 116) 00:07:42.318 12653.489 - 12703.902: 40.7939% ( 75) 00:07:42.318 12703.902 - 12754.314: 41.8391% ( 99) 00:07:42.318 12754.314 - 12804.726: 42.8948% ( 100) 00:07:42.318 12804.726 - 12855.138: 44.0139% ( 106) 00:07:42.318 12855.138 - 12905.551: 44.9852% ( 92) 00:07:42.318 12905.551 - 13006.375: 46.8644% ( 178) 00:07:42.318 13006.375 - 13107.200: 48.8598% ( 189) 00:07:42.318 13107.200 - 13208.025: 50.6546% ( 170) 00:07:42.318 13208.025 - 13308.849: 52.4704% ( 172) 00:07:42.318 13308.849 - 13409.674: 54.6030% ( 202) 00:07:42.318 13409.674 - 13510.498: 56.0283% ( 135) 00:07:42.318 13510.498 - 13611.323: 57.5486% ( 144) 00:07:42.318 13611.323 - 13712.148: 59.0794% ( 145) 00:07:42.318 13712.148 - 13812.972: 60.5152% ( 136) 00:07:42.318 13812.972 - 13913.797: 62.0671% ( 147) 00:07:42.318 13913.797 - 14014.622: 63.8091% ( 165) 00:07:42.318 14014.622 - 14115.446: 65.5933% ( 169) 00:07:42.318 14115.446 - 14216.271: 66.8180% ( 116) 00:07:42.318 14216.271 - 14317.095: 68.0743% ( 119) 00:07:42.318 14317.095 - 14417.920: 69.4890% ( 134) 00:07:42.318 14417.920 - 14518.745: 70.7665% ( 121) 00:07:42.318 14518.745 - 14619.569: 72.1073% ( 127) 00:07:42.318 14619.569 - 14720.394: 73.5114% ( 133) 00:07:42.318 14720.394 - 14821.218: 74.7044% ( 113) 00:07:42.318 14821.218 - 14922.043: 75.8340% ( 107) 00:07:42.318 14922.043 - 15022.868: 77.0481% ( 115) 00:07:42.318 15022.868 - 15123.692: 78.3256% ( 121) 00:07:42.318 15123.692 - 15224.517: 79.5291% ( 114) 00:07:42.318 15224.517 - 15325.342: 80.6271% ( 104) 00:07:42.318 15325.342 - 15426.166: 81.7145% ( 103) 00:07:42.318 15426.166 - 15526.991: 82.7386% ( 97) 00:07:42.318 15526.991 - 15627.815: 83.9105% ( 111) 00:07:42.318 15627.815 - 15728.640: 84.9345% ( 97) 00:07:42.318 15728.640 - 15829.465: 85.9692% ( 98) 00:07:42.318 15829.465 - 15930.289: 86.9616% ( 94) 00:07:42.318 15930.289 - 16031.114: 87.9012% ( 89) 00:07:42.318 16031.114 - 16131.938: 88.5346% ( 60) 00:07:42.318 16131.938 - 16232.763: 89.2948% ( 72) 00:07:42.318 16232.763 - 16333.588: 89.9916% ( 66) 00:07:42.318 16333.588 - 16434.412: 90.6989% ( 67) 00:07:42.318 
16434.412 - 16535.237: 91.3112% ( 58) 00:07:42.318 16535.237 - 16636.062: 91.8708% ( 53) 00:07:42.318 16636.062 - 16736.886: 92.4831% ( 58) 00:07:42.318 16736.886 - 16837.711: 93.0532% ( 54) 00:07:42.318 16837.711 - 16938.535: 93.4755% ( 40) 00:07:42.318 16938.535 - 17039.360: 93.8661% ( 37) 00:07:42.318 17039.360 - 17140.185: 94.3518% ( 46) 00:07:42.318 17140.185 - 17241.009: 94.8163% ( 44) 00:07:42.318 17241.009 - 17341.834: 95.2069% ( 37) 00:07:42.318 17341.834 - 17442.658: 95.5764% ( 35) 00:07:42.318 17442.658 - 17543.483: 95.7981% ( 21) 00:07:42.318 17543.483 - 17644.308: 96.0726% ( 26) 00:07:42.318 17644.308 - 17745.132: 96.3471% ( 26) 00:07:42.318 17745.132 - 17845.957: 96.6005% ( 24) 00:07:42.318 17845.957 - 17946.782: 96.8117% ( 20) 00:07:42.318 17946.782 - 18047.606: 96.9700% ( 15) 00:07:42.318 18047.606 - 18148.431: 97.0967% ( 12) 00:07:42.318 18148.431 - 18249.255: 97.2128% ( 11) 00:07:42.318 18249.255 - 18350.080: 97.3712% ( 15) 00:07:42.318 18350.080 - 18450.905: 97.5612% ( 18) 00:07:42.318 18450.905 - 18551.729: 97.7196% ( 15) 00:07:42.318 18551.729 - 18652.554: 97.8780% ( 15) 00:07:42.318 18652.554 - 18753.378: 98.0785% ( 19) 00:07:42.318 18753.378 - 18854.203: 98.1841% ( 10) 00:07:42.318 18854.203 - 18955.028: 98.2791% ( 9) 00:07:42.318 18955.028 - 19055.852: 98.3636% ( 8) 00:07:42.318 19055.852 - 19156.677: 98.4586% ( 9) 00:07:42.318 19156.677 - 19257.502: 98.5220% ( 6) 00:07:42.318 19257.502 - 19358.326: 98.5642% ( 4) 00:07:42.318 19358.326 - 19459.151: 98.6064% ( 4) 00:07:42.318 19459.151 - 19559.975: 98.6381% ( 3) 00:07:42.318 19559.975 - 19660.800: 98.6486% ( 1) 00:07:42.318 20769.871 - 20870.695: 98.6592% ( 1) 00:07:42.318 20870.695 - 20971.520: 98.6909% ( 3) 00:07:42.318 20971.520 - 21072.345: 98.7331% ( 4) 00:07:42.318 21072.345 - 21173.169: 98.7753% ( 4) 00:07:42.318 21173.169 - 21273.994: 98.8281% ( 5) 00:07:42.318 21273.994 - 21374.818: 98.8704% ( 4) 00:07:42.318 21374.818 - 21475.643: 98.9126% ( 4) 00:07:42.318 21475.643 - 21576.468: 98.9548% ( 4) 00:07:42.318 21576.468 - 21677.292: 98.9970% ( 4) 00:07:42.318 21677.292 - 21778.117: 99.0393% ( 4) 00:07:42.318 21778.117 - 21878.942: 99.0815% ( 4) 00:07:42.318 21878.942 - 21979.766: 99.1237% ( 4) 00:07:42.318 21979.766 - 22080.591: 99.1660% ( 4) 00:07:42.318 22080.591 - 22181.415: 99.2188% ( 5) 00:07:42.318 22181.415 - 22282.240: 99.2610% ( 4) 00:07:42.318 22282.240 - 22383.065: 99.3032% ( 4) 00:07:42.318 22383.065 - 22483.889: 99.3243% ( 2) 00:07:42.318 27424.295 - 27625.945: 99.3349% ( 1) 00:07:42.318 27625.945 - 27827.594: 99.4193% ( 8) 00:07:42.318 27827.594 - 28029.243: 99.5038% ( 8) 00:07:42.318 28029.243 - 28230.892: 99.5883% ( 8) 00:07:42.318 28230.892 - 28432.542: 99.6727% ( 8) 00:07:42.318 28432.542 - 28634.191: 99.7677% ( 9) 00:07:42.318 28634.191 - 28835.840: 99.8522% ( 8) 00:07:42.318 28835.840 - 29037.489: 99.9367% ( 8) 00:07:42.318 29037.489 - 29239.138: 100.0000% ( 6) 00:07:42.318 00:07:42.318 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:42.318 ============================================================================== 00:07:42.318 Range in us Cumulative IO count 00:07:42.318 9074.215 - 9124.628: 0.0105% ( 1) 00:07:42.318 9225.452 - 9275.865: 0.0210% ( 1) 00:07:42.318 9275.865 - 9326.277: 0.0524% ( 3) 00:07:42.318 9326.277 - 9376.689: 0.0734% ( 2) 00:07:42.318 9376.689 - 9427.102: 0.1154% ( 4) 00:07:42.318 9427.102 - 9477.514: 0.2412% ( 12) 00:07:42.318 9477.514 - 9527.926: 0.4299% ( 18) 00:07:42.318 9527.926 - 9578.338: 0.6607% ( 22) 00:07:42.318 9578.338 - 9628.751: 
0.8494% ( 18) 00:07:42.318 9628.751 - 9679.163: 1.1221% ( 26) 00:07:42.318 9679.163 - 9729.575: 1.3528% ( 22) 00:07:42.318 9729.575 - 9779.988: 1.5835% ( 22) 00:07:42.318 9779.988 - 9830.400: 1.8561% ( 26) 00:07:42.318 9830.400 - 9880.812: 2.1497% ( 28) 00:07:42.318 9880.812 - 9931.225: 2.5168% ( 35) 00:07:42.318 9931.225 - 9981.637: 2.8628% ( 33) 00:07:42.318 9981.637 - 10032.049: 3.2089% ( 33) 00:07:42.319 10032.049 - 10082.462: 3.7437% ( 51) 00:07:42.319 10082.462 - 10132.874: 3.9954% ( 24) 00:07:42.319 10132.874 - 10183.286: 4.1841% ( 18) 00:07:42.319 10183.286 - 10233.698: 4.3729% ( 18) 00:07:42.319 10233.698 - 10284.111: 4.5931% ( 21) 00:07:42.319 10284.111 - 10334.523: 4.7609% ( 16) 00:07:42.319 10334.523 - 10384.935: 4.9602% ( 19) 00:07:42.319 10384.935 - 10435.348: 5.2433% ( 27) 00:07:42.319 10435.348 - 10485.760: 5.5055% ( 25) 00:07:42.319 10485.760 - 10536.172: 5.8410% ( 32) 00:07:42.319 10536.172 - 10586.585: 6.3758% ( 51) 00:07:42.319 10586.585 - 10636.997: 7.1099% ( 70) 00:07:42.319 10636.997 - 10687.409: 7.9488% ( 80) 00:07:42.319 10687.409 - 10737.822: 8.9136% ( 92) 00:07:42.319 10737.822 - 10788.234: 9.9832% ( 102) 00:07:42.319 10788.234 - 10838.646: 11.0214% ( 99) 00:07:42.319 10838.646 - 10889.058: 12.3951% ( 131) 00:07:42.319 10889.058 - 10939.471: 13.3180% ( 88) 00:07:42.319 10939.471 - 10989.883: 14.0625% ( 71) 00:07:42.319 10989.883 - 11040.295: 14.7022% ( 61) 00:07:42.319 11040.295 - 11090.708: 15.4887% ( 75) 00:07:42.319 11090.708 - 11141.120: 16.2122% ( 69) 00:07:42.319 11141.120 - 11191.532: 16.9987% ( 75) 00:07:42.319 11191.532 - 11241.945: 18.0998% ( 105) 00:07:42.319 11241.945 - 11292.357: 18.8549% ( 72) 00:07:42.319 11292.357 - 11342.769: 19.3582% ( 48) 00:07:42.319 11342.769 - 11393.182: 19.8091% ( 43) 00:07:42.319 11393.182 - 11443.594: 20.2706% ( 44) 00:07:42.319 11443.594 - 11494.006: 20.7949% ( 50) 00:07:42.319 11494.006 - 11544.418: 21.3297% ( 51) 00:07:42.319 11544.418 - 11594.831: 21.7596% ( 41) 00:07:42.319 11594.831 - 11645.243: 22.2315% ( 45) 00:07:42.319 11645.243 - 11695.655: 23.1229% ( 85) 00:07:42.319 11695.655 - 11746.068: 23.6158% ( 47) 00:07:42.319 11746.068 - 11796.480: 24.1820% ( 54) 00:07:42.319 11796.480 - 11846.892: 24.7169% ( 51) 00:07:42.319 11846.892 - 11897.305: 25.5243% ( 77) 00:07:42.319 11897.305 - 11947.717: 26.1116% ( 56) 00:07:42.319 11947.717 - 11998.129: 26.7722% ( 63) 00:07:42.319 11998.129 - 12048.542: 27.5273% ( 72) 00:07:42.319 12048.542 - 12098.954: 28.3767% ( 81) 00:07:42.319 12098.954 - 12149.366: 29.3310% ( 91) 00:07:42.319 12149.366 - 12199.778: 30.2013% ( 83) 00:07:42.319 12199.778 - 12250.191: 31.1766% ( 93) 00:07:42.319 12250.191 - 12300.603: 32.0575% ( 84) 00:07:42.319 12300.603 - 12351.015: 32.9069% ( 81) 00:07:42.319 12351.015 - 12401.428: 33.9031% ( 95) 00:07:42.319 12401.428 - 12451.840: 34.8679% ( 92) 00:07:42.319 12451.840 - 12502.252: 35.8641% ( 95) 00:07:42.319 12502.252 - 12552.665: 37.0176% ( 110) 00:07:42.319 12552.665 - 12603.077: 38.2341% ( 116) 00:07:42.319 12603.077 - 12653.489: 39.3247% ( 104) 00:07:42.319 12653.489 - 12703.902: 40.2789% ( 91) 00:07:42.319 12703.902 - 12754.314: 41.4115% ( 108) 00:07:42.319 12754.314 - 12804.726: 42.3448% ( 89) 00:07:42.319 12804.726 - 12855.138: 43.4354% ( 104) 00:07:42.319 12855.138 - 12905.551: 44.7357% ( 124) 00:07:42.319 12905.551 - 13006.375: 46.6443% ( 182) 00:07:42.319 13006.375 - 13107.200: 48.9304% ( 218) 00:07:42.319 13107.200 - 13208.025: 50.8389% ( 182) 00:07:42.319 13208.025 - 13308.849: 52.9362% ( 200) 00:07:42.319 13308.849 - 13409.674: 54.6036% ( 
159) 00:07:42.319 13409.674 - 13510.498: 56.0088% ( 134) 00:07:42.319 13510.498 - 13611.323: 57.4874% ( 141) 00:07:42.319 13611.323 - 13712.148: 58.9975% ( 144) 00:07:42.319 13712.148 - 13812.972: 60.6648% ( 159) 00:07:42.319 13812.972 - 13913.797: 62.1854% ( 145) 00:07:42.319 13913.797 - 14014.622: 63.8108% ( 155) 00:07:42.319 14014.622 - 14115.446: 65.2475% ( 137) 00:07:42.319 14115.446 - 14216.271: 67.0302% ( 170) 00:07:42.319 14216.271 - 14317.095: 68.6032% ( 150) 00:07:42.319 14317.095 - 14417.920: 70.4908% ( 180) 00:07:42.319 14417.920 - 14518.745: 71.7701% ( 122) 00:07:42.319 14518.745 - 14619.569: 73.1334% ( 130) 00:07:42.319 14619.569 - 14720.394: 74.3393% ( 115) 00:07:42.319 14720.394 - 14821.218: 75.4614% ( 107) 00:07:42.319 14821.218 - 14922.043: 76.9086% ( 138) 00:07:42.319 14922.043 - 15022.868: 78.2404% ( 127) 00:07:42.319 15022.868 - 15123.692: 79.2995% ( 101) 00:07:42.319 15123.692 - 15224.517: 80.2223% ( 88) 00:07:42.319 15224.517 - 15325.342: 81.4597% ( 118) 00:07:42.319 15325.342 - 15426.166: 82.6028% ( 109) 00:07:42.319 15426.166 - 15526.991: 84.1338% ( 146) 00:07:42.319 15526.991 - 15627.815: 85.6544% ( 145) 00:07:42.319 15627.815 - 15728.640: 86.8813% ( 117) 00:07:42.319 15728.640 - 15829.465: 87.8565% ( 93) 00:07:42.319 15829.465 - 15930.289: 88.6955% ( 80) 00:07:42.319 15930.289 - 16031.114: 89.5763% ( 84) 00:07:42.319 16031.114 - 16131.938: 90.3628% ( 75) 00:07:42.319 16131.938 - 16232.763: 90.9291% ( 54) 00:07:42.319 16232.763 - 16333.588: 91.6107% ( 65) 00:07:42.319 16333.588 - 16434.412: 92.1456% ( 51) 00:07:42.319 16434.412 - 16535.237: 92.6489% ( 48) 00:07:42.319 16535.237 - 16636.062: 93.0684% ( 40) 00:07:42.319 16636.062 - 16736.886: 93.5298% ( 44) 00:07:42.319 16736.886 - 16837.711: 94.1904% ( 63) 00:07:42.319 16837.711 - 16938.535: 94.7253% ( 51) 00:07:42.319 16938.535 - 17039.360: 95.2076% ( 46) 00:07:42.319 17039.360 - 17140.185: 95.5956% ( 37) 00:07:42.319 17140.185 - 17241.009: 95.8788% ( 27) 00:07:42.319 17241.009 - 17341.834: 96.0570% ( 17) 00:07:42.319 17341.834 - 17442.658: 96.2878% ( 22) 00:07:42.319 17442.658 - 17543.483: 96.4555% ( 16) 00:07:42.319 17543.483 - 17644.308: 96.7177% ( 25) 00:07:42.319 17644.308 - 17745.132: 96.8331% ( 11) 00:07:42.319 17745.132 - 17845.957: 96.9904% ( 15) 00:07:42.319 17845.957 - 17946.782: 97.1686% ( 17) 00:07:42.319 17946.782 - 18047.606: 97.3679% ( 19) 00:07:42.319 18047.606 - 18148.431: 97.7559% ( 37) 00:07:42.319 18148.431 - 18249.255: 97.9341% ( 17) 00:07:42.319 18249.255 - 18350.080: 98.2068% ( 26) 00:07:42.319 18350.080 - 18450.905: 98.4060% ( 19) 00:07:42.319 18450.905 - 18551.729: 98.5738% ( 16) 00:07:42.319 18551.729 - 18652.554: 98.7731% ( 19) 00:07:42.319 18652.554 - 18753.378: 98.8570% ( 8) 00:07:42.319 18753.378 - 18854.203: 98.9409% ( 8) 00:07:42.319 18854.203 - 18955.028: 99.0352% ( 9) 00:07:42.319 18955.028 - 19055.852: 99.1191% ( 8) 00:07:42.319 19055.852 - 19156.677: 99.1716% ( 5) 00:07:42.319 19156.677 - 19257.502: 99.2135% ( 4) 00:07:42.319 19257.502 - 19358.326: 99.2555% ( 4) 00:07:42.319 19358.326 - 19459.151: 99.2974% ( 4) 00:07:42.319 19459.151 - 19559.975: 99.3289% ( 3) 00:07:42.319 20870.695 - 20971.520: 99.3603% ( 3) 00:07:42.319 20971.520 - 21072.345: 99.4023% ( 4) 00:07:42.319 21072.345 - 21173.169: 99.4442% ( 4) 00:07:42.319 21173.169 - 21273.994: 99.4862% ( 4) 00:07:42.319 21273.994 - 21374.818: 99.5281% ( 4) 00:07:42.319 21374.818 - 21475.643: 99.5701% ( 4) 00:07:42.319 21475.643 - 21576.468: 99.6225% ( 5) 00:07:42.319 21576.468 - 21677.292: 99.6644% ( 4) 00:07:42.319 21677.292 - 
21778.117: 99.7064% ( 4) 00:07:42.319 21778.117 - 21878.942: 99.7483% ( 4) 00:07:42.319 21878.942 - 21979.766: 99.7903% ( 4) 00:07:42.319 21979.766 - 22080.591: 99.8322% ( 4) 00:07:42.319 22080.591 - 22181.415: 99.8742% ( 4) 00:07:42.319 22181.415 - 22282.240: 99.9266% ( 5) 00:07:42.319 22282.240 - 22383.065: 99.9581% ( 3) 00:07:42.319 22383.065 - 22483.889: 100.0000% ( 4) 00:07:42.319 00:07:42.319 ************************************ 00:07:42.319 END TEST nvme_perf 00:07:42.319 ************************************ 00:07:42.319 01:33:25 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:42.319 00:07:42.319 real 0m2.515s 00:07:42.319 user 0m2.218s 00:07:42.319 sys 0m0.194s 00:07:42.319 01:33:25 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.319 01:33:25 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:42.319 01:33:25 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:42.319 01:33:25 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:42.319 01:33:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.319 01:33:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.319 ************************************ 00:07:42.319 START TEST nvme_hello_world 00:07:42.319 ************************************ 00:07:42.319 01:33:25 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:42.319 Initializing NVMe Controllers 00:07:42.319 Attached to 0000:00:11.0 00:07:42.319 Namespace ID: 1 size: 5GB 00:07:42.319 Attached to 0000:00:13.0 00:07:42.319 Namespace ID: 1 size: 1GB 00:07:42.319 Attached to 0000:00:10.0 00:07:42.319 Namespace ID: 1 size: 6GB 00:07:42.319 Attached to 0000:00:12.0 00:07:42.319 Namespace ID: 1 size: 4GB 00:07:42.319 Namespace ID: 2 size: 4GB 00:07:42.319 Namespace ID: 3 size: 4GB 00:07:42.319 Initialization complete. 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 00:07:42.319 INFO: using host memory buffer for IO 00:07:42.319 Hello world! 
00:07:42.319 ************************************ 00:07:42.319 END TEST nvme_hello_world 00:07:42.319 ************************************ 00:07:42.319 00:07:42.319 real 0m0.236s 00:07:42.319 user 0m0.079s 00:07:42.319 sys 0m0.107s 00:07:42.319 01:33:26 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.319 01:33:26 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:42.646 01:33:26 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:42.646 01:33:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.646 01:33:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.646 01:33:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.646 ************************************ 00:07:42.646 START TEST nvme_sgl 00:07:42.646 ************************************ 00:07:42.646 01:33:26 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:42.646 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:42.646 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:42.646 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:42.646 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:42.646 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:42.646 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:42.646 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:42.646 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:07:42.646 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:42.646 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:42.924 NVMe Readv/Writev Request test 00:07:42.924 Attached to 0000:00:11.0 00:07:42.924 Attached to 0000:00:13.0 00:07:42.924 Attached to 0000:00:10.0 00:07:42.924 Attached to 0000:00:12.0 00:07:42.924 0000:00:11.0: build_io_request_2 test passed 00:07:42.924 0000:00:11.0: build_io_request_4 test passed 00:07:42.924 0000:00:11.0: build_io_request_5 test passed 00:07:42.924 0000:00:11.0: build_io_request_6 test passed 00:07:42.924 0000:00:11.0: build_io_request_7 test passed 00:07:42.924 0000:00:11.0: build_io_request_10 test passed 00:07:42.924 0000:00:10.0: build_io_request_2 test passed 00:07:42.924 0000:00:10.0: build_io_request_4 test passed 00:07:42.924 0000:00:10.0: build_io_request_5 test passed 00:07:42.924 0000:00:10.0: build_io_request_6 test passed 00:07:42.924 0000:00:10.0: build_io_request_7 test passed 00:07:42.924 0000:00:10.0: build_io_request_10 test passed 00:07:42.924 Cleaning up... 00:07:42.924 ************************************ 00:07:42.924 END TEST nvme_sgl 00:07:42.924 ************************************ 00:07:42.924 00:07:42.924 real 0m0.293s 00:07:42.924 user 0m0.151s 00:07:42.924 sys 0m0.099s 00:07:42.924 01:33:26 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.924 01:33:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:42.924 01:33:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.924 ************************************ 00:07:42.924 START TEST nvme_e2edp 00:07:42.924 ************************************ 00:07:42.924 01:33:26 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:42.924 NVMe Write/Read with End-to-End data protection test 00:07:42.924 Attached to 0000:00:11.0 00:07:42.924 Attached to 0000:00:13.0 00:07:42.924 Attached to 0000:00:10.0 00:07:42.924 Attached to 0000:00:12.0 00:07:42.924 Cleaning up... 
00:07:42.924 ************************************ 00:07:42.924 END TEST nvme_e2edp 00:07:42.924 ************************************ 00:07:42.924 00:07:42.924 real 0m0.211s 00:07:42.924 user 0m0.074s 00:07:42.924 sys 0m0.092s 00:07:42.924 01:33:26 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.924 01:33:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:42.924 01:33:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.924 01:33:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.183 ************************************ 00:07:43.183 START TEST nvme_reserve 00:07:43.183 ************************************ 00:07:43.183 01:33:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:43.183 ===================================================== 00:07:43.183 NVMe Controller at PCI bus 0, device 17, function 0 00:07:43.183 ===================================================== 00:07:43.183 Reservations: Not Supported 00:07:43.183 ===================================================== 00:07:43.183 NVMe Controller at PCI bus 0, device 19, function 0 00:07:43.183 ===================================================== 00:07:43.183 Reservations: Not Supported 00:07:43.183 ===================================================== 00:07:43.183 NVMe Controller at PCI bus 0, device 16, function 0 00:07:43.183 ===================================================== 00:07:43.183 Reservations: Not Supported 00:07:43.183 ===================================================== 00:07:43.183 NVMe Controller at PCI bus 0, device 18, function 0 00:07:43.183 ===================================================== 00:07:43.183 Reservations: Not Supported 00:07:43.183 Reservation test passed 00:07:43.183 ************************************ 00:07:43.183 END TEST nvme_reserve 00:07:43.183 ************************************ 00:07:43.183 00:07:43.183 real 0m0.192s 00:07:43.183 user 0m0.073s 00:07:43.183 sys 0m0.086s 00:07:43.183 01:33:27 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.183 01:33:27 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:43.183 01:33:27 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:43.183 01:33:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.183 01:33:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.183 01:33:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.183 ************************************ 00:07:43.183 START TEST nvme_err_injection 00:07:43.183 ************************************ 00:07:43.183 01:33:27 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:43.442 NVMe Error Injection test 00:07:43.442 Attached to 0000:00:11.0 00:07:43.442 Attached to 0000:00:13.0 00:07:43.442 Attached to 0000:00:10.0 00:07:43.442 Attached to 0000:00:12.0 00:07:43.442 0000:00:13.0: get features failed as expected 00:07:43.442 0000:00:10.0: get features failed as expected 00:07:43.442 0000:00:12.0: get features failed as expected 00:07:43.442 0000:00:11.0: get features failed as expected 00:07:43.442 
0000:00:11.0: get features successfully as expected 00:07:43.442 0000:00:13.0: get features successfully as expected 00:07:43.442 0000:00:10.0: get features successfully as expected 00:07:43.442 0000:00:12.0: get features successfully as expected 00:07:43.442 0000:00:12.0: read failed as expected 00:07:43.442 0000:00:11.0: read failed as expected 00:07:43.442 0000:00:13.0: read failed as expected 00:07:43.442 0000:00:10.0: read failed as expected 00:07:43.442 0000:00:12.0: read successfully as expected 00:07:43.442 0000:00:11.0: read successfully as expected 00:07:43.442 0000:00:13.0: read successfully as expected 00:07:43.442 0000:00:10.0: read successfully as expected 00:07:43.442 Cleaning up... 00:07:43.442 ************************************ 00:07:43.442 END TEST nvme_err_injection 00:07:43.442 ************************************ 00:07:43.442 00:07:43.442 real 0m0.224s 00:07:43.442 user 0m0.086s 00:07:43.442 sys 0m0.096s 00:07:43.442 01:33:27 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.442 01:33:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:43.700 01:33:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:43.700 01:33:27 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:43.700 01:33:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.700 01:33:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.700 ************************************ 00:07:43.700 START TEST nvme_overhead 00:07:43.700 ************************************ 00:07:43.700 01:33:27 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:44.637 Initializing NVMe Controllers 00:07:44.637 Attached to 0000:00:11.0 00:07:44.637 Attached to 0000:00:13.0 00:07:44.637 Attached to 0000:00:10.0 00:07:44.637 Attached to 0000:00:12.0 00:07:44.637 Initialization complete. Launching workers. 
00:07:44.637 submit (in ns) avg, min, max = 12443.6, 9803.1, 244355.4 00:07:44.637 complete (in ns) avg, min, max = 7972.4, 7325.4, 203380.8 00:07:44.637 00:07:44.637 Submit histogram 00:07:44.637 ================ 00:07:44.637 Range in us Cumulative Count 00:07:44.637 9.797 - 9.846: 0.0476% ( 2) 00:07:44.637 10.338 - 10.388: 0.0714% ( 1) 00:07:44.637 10.437 - 10.486: 0.0951% ( 1) 00:07:44.637 10.634 - 10.683: 0.1427% ( 2) 00:07:44.637 10.782 - 10.831: 0.1665% ( 1) 00:07:44.637 10.831 - 10.880: 0.1903% ( 1) 00:07:44.637 10.880 - 10.929: 0.2854% ( 4) 00:07:44.637 10.929 - 10.978: 0.6422% ( 15) 00:07:44.637 10.978 - 11.028: 1.5699% ( 39) 00:07:44.637 11.028 - 11.077: 3.4491% ( 79) 00:07:44.637 11.077 - 11.126: 6.3035% ( 120) 00:07:44.637 11.126 - 11.175: 11.2988% ( 210) 00:07:44.637 11.175 - 11.225: 17.6261% ( 266) 00:07:44.637 11.225 - 11.274: 25.3806% ( 326) 00:07:44.637 11.274 - 11.323: 34.0628% ( 365) 00:07:44.637 11.323 - 11.372: 44.3149% ( 431) 00:07:44.637 11.372 - 11.422: 54.2578% ( 418) 00:07:44.637 11.422 - 11.471: 63.1541% ( 374) 00:07:44.637 11.471 - 11.520: 69.4101% ( 263) 00:07:44.637 11.520 - 11.569: 73.2398% ( 161) 00:07:44.637 11.569 - 11.618: 75.7374% ( 105) 00:07:44.637 11.618 - 11.668: 77.4025% ( 70) 00:07:44.637 11.668 - 11.717: 78.3539% ( 40) 00:07:44.637 11.717 - 11.766: 79.0676% ( 30) 00:07:44.637 11.766 - 11.815: 79.8287% ( 32) 00:07:44.637 11.815 - 11.865: 80.5186% ( 29) 00:07:44.637 11.865 - 11.914: 81.1370% ( 26) 00:07:44.637 11.914 - 11.963: 81.8268% ( 29) 00:07:44.637 11.963 - 12.012: 82.4453% ( 26) 00:07:44.637 12.012 - 12.062: 82.8259% ( 16) 00:07:44.637 12.062 - 12.111: 83.1351% ( 13) 00:07:44.637 12.111 - 12.160: 83.3730% ( 10) 00:07:44.637 12.160 - 12.209: 83.8011% ( 18) 00:07:44.637 12.209 - 12.258: 83.9914% ( 8) 00:07:44.637 12.258 - 12.308: 84.2055% ( 9) 00:07:44.637 12.308 - 12.357: 84.3482% ( 6) 00:07:44.637 12.357 - 12.406: 84.4196% ( 3) 00:07:44.637 12.406 - 12.455: 84.5623% ( 6) 00:07:44.637 12.455 - 12.505: 84.6575% ( 4) 00:07:44.637 12.505 - 12.554: 84.7526% ( 4) 00:07:44.637 12.554 - 12.603: 84.8002% ( 2) 00:07:44.637 12.603 - 12.702: 84.8478% ( 2) 00:07:44.637 12.702 - 12.800: 84.9667% ( 5) 00:07:44.637 12.800 - 12.898: 85.0381% ( 3) 00:07:44.637 12.898 - 12.997: 85.0618% ( 1) 00:07:44.637 12.997 - 13.095: 85.0856% ( 1) 00:07:44.637 13.095 - 13.194: 85.1332% ( 2) 00:07:44.637 13.194 - 13.292: 85.2046% ( 3) 00:07:44.637 13.292 - 13.391: 85.3473% ( 6) 00:07:44.637 13.391 - 13.489: 85.6089% ( 11) 00:07:44.637 13.489 - 13.588: 85.8706% ( 11) 00:07:44.637 13.588 - 13.686: 85.9895% ( 5) 00:07:44.637 13.686 - 13.785: 86.1798% ( 8) 00:07:44.637 13.785 - 13.883: 86.2750% ( 4) 00:07:44.637 13.883 - 13.982: 86.3939% ( 5) 00:07:44.637 13.982 - 14.080: 86.5604% ( 7) 00:07:44.637 14.080 - 14.178: 86.6318% ( 3) 00:07:44.637 14.178 - 14.277: 86.8221% ( 8) 00:07:44.637 14.277 - 14.375: 86.9648% ( 6) 00:07:44.637 14.375 - 14.474: 87.0837% ( 5) 00:07:44.637 14.474 - 14.572: 87.1789% ( 4) 00:07:44.637 14.572 - 14.671: 87.2740% ( 4) 00:07:44.637 14.671 - 14.769: 87.3930% ( 5) 00:07:44.637 14.769 - 14.868: 87.4881% ( 4) 00:07:44.637 14.868 - 14.966: 87.6308% ( 6) 00:07:44.637 14.966 - 15.065: 87.8211% ( 8) 00:07:44.637 15.065 - 15.163: 87.9638% ( 6) 00:07:44.637 15.163 - 15.262: 88.1541% ( 8) 00:07:44.637 15.262 - 15.360: 88.3682% ( 9) 00:07:44.637 15.360 - 15.458: 88.5109% ( 6) 00:07:44.637 15.458 - 15.557: 88.6775% ( 7) 00:07:44.637 15.557 - 15.655: 88.8202% ( 6) 00:07:44.637 15.655 - 15.754: 89.0343% ( 9) 00:07:44.637 15.754 - 15.852: 89.3435% ( 13) 00:07:44.637 
15.852 - 15.951: 89.5338% ( 8) 00:07:44.637 15.951 - 16.049: 89.7479% ( 9) 00:07:44.637 16.049 - 16.148: 90.0095% ( 11) 00:07:44.637 16.148 - 16.246: 90.3901% ( 16) 00:07:44.637 16.246 - 16.345: 90.6042% ( 9) 00:07:44.897 16.345 - 16.443: 90.7707% ( 7) 00:07:44.897 16.443 - 16.542: 91.0086% ( 10) 00:07:44.897 16.542 - 16.640: 91.1989% ( 8) 00:07:44.897 16.640 - 16.738: 91.4843% ( 12) 00:07:44.897 16.738 - 16.837: 91.6746% ( 8) 00:07:44.897 16.837 - 16.935: 91.7935% ( 5) 00:07:44.897 16.935 - 17.034: 92.0076% ( 9) 00:07:44.897 17.034 - 17.132: 92.1979% ( 8) 00:07:44.897 17.132 - 17.231: 92.3882% ( 8) 00:07:44.897 17.231 - 17.329: 92.5785% ( 8) 00:07:44.897 17.329 - 17.428: 92.7688% ( 8) 00:07:44.897 17.428 - 17.526: 93.0780% ( 13) 00:07:44.897 17.526 - 17.625: 93.3635% ( 12) 00:07:44.897 17.625 - 17.723: 93.6013% ( 10) 00:07:44.897 17.723 - 17.822: 93.8868% ( 12) 00:07:44.897 17.822 - 17.920: 94.1960% ( 13) 00:07:44.897 17.920 - 18.018: 94.3863% ( 8) 00:07:44.897 18.018 - 18.117: 94.5528% ( 7) 00:07:44.897 18.117 - 18.215: 94.6717% ( 5) 00:07:44.897 18.215 - 18.314: 94.9096% ( 10) 00:07:44.897 18.314 - 18.412: 95.1951% ( 12) 00:07:44.897 18.412 - 18.511: 95.4091% ( 9) 00:07:44.897 18.511 - 18.609: 95.6946% ( 12) 00:07:44.897 18.609 - 18.708: 96.0038% ( 13) 00:07:44.897 18.708 - 18.806: 96.3844% ( 16) 00:07:44.897 18.806 - 18.905: 96.8601% ( 20) 00:07:44.897 18.905 - 19.003: 97.2169% ( 15) 00:07:44.897 19.003 - 19.102: 97.4310% ( 9) 00:07:44.897 19.102 - 19.200: 97.6927% ( 11) 00:07:44.897 19.200 - 19.298: 97.8830% ( 8) 00:07:44.897 19.298 - 19.397: 98.0019% ( 5) 00:07:44.897 19.397 - 19.495: 98.0733% ( 3) 00:07:44.897 19.495 - 19.594: 98.1684% ( 4) 00:07:44.897 19.594 - 19.692: 98.2636% ( 4) 00:07:44.897 19.692 - 19.791: 98.3587% ( 4) 00:07:44.897 19.791 - 19.889: 98.4539% ( 4) 00:07:44.897 19.889 - 19.988: 98.5966% ( 6) 00:07:44.897 19.988 - 20.086: 98.7393% ( 6) 00:07:44.897 20.086 - 20.185: 98.8107% ( 3) 00:07:44.897 20.185 - 20.283: 98.8344% ( 1) 00:07:44.897 20.480 - 20.578: 98.9534% ( 5) 00:07:44.897 20.578 - 20.677: 98.9772% ( 1) 00:07:44.897 20.677 - 20.775: 99.0723% ( 4) 00:07:44.897 20.775 - 20.874: 99.1437% ( 3) 00:07:44.897 20.874 - 20.972: 99.1675% ( 1) 00:07:44.897 20.972 - 21.071: 99.1912% ( 1) 00:07:44.897 21.071 - 21.169: 99.2150% ( 1) 00:07:44.897 21.169 - 21.268: 99.2388% ( 1) 00:07:44.897 21.268 - 21.366: 99.2626% ( 1) 00:07:44.897 21.366 - 21.465: 99.2864% ( 1) 00:07:44.897 21.465 - 21.563: 99.3102% ( 1) 00:07:44.897 21.563 - 21.662: 99.3340% ( 1) 00:07:44.897 21.760 - 21.858: 99.3578% ( 1) 00:07:44.897 21.957 - 22.055: 99.3815% ( 1) 00:07:44.897 22.154 - 22.252: 99.4291% ( 2) 00:07:44.897 22.252 - 22.351: 99.4529% ( 1) 00:07:44.897 22.449 - 22.548: 99.4767% ( 1) 00:07:44.897 22.942 - 23.040: 99.5005% ( 1) 00:07:44.897 23.040 - 23.138: 99.5243% ( 1) 00:07:44.897 23.138 - 23.237: 99.5480% ( 1) 00:07:44.897 23.828 - 23.926: 99.5956% ( 2) 00:07:44.897 24.418 - 24.517: 99.6194% ( 1) 00:07:44.897 26.388 - 26.585: 99.6432% ( 1) 00:07:44.897 27.175 - 27.372: 99.6670% ( 1) 00:07:44.897 29.342 - 29.538: 99.6908% ( 1) 00:07:44.897 31.508 - 31.705: 99.7146% ( 1) 00:07:44.897 32.098 - 32.295: 99.7383% ( 1) 00:07:44.897 36.234 - 36.431: 99.7621% ( 1) 00:07:44.897 39.582 - 39.778: 99.7859% ( 1) 00:07:44.897 40.369 - 40.566: 99.8097% ( 1) 00:07:44.897 43.717 - 43.914: 99.8335% ( 1) 00:07:44.897 61.440 - 61.834: 99.8811% ( 2) 00:07:44.897 67.742 - 68.135: 99.9049% ( 1) 00:07:44.897 94.523 - 94.917: 99.9286% ( 1) 00:07:44.897 99.249 - 99.643: 99.9524% ( 1) 00:07:44.897 161.477 - 
162.265: 99.9762% ( 1) 00:07:44.897 244.185 - 245.760: 100.0000% ( 1) 00:07:44.897 00:07:44.897 Complete histogram 00:07:44.897 ================== 00:07:44.897 Range in us Cumulative Count 00:07:44.897 7.286 - 7.335: 0.0238% ( 1) 00:07:44.897 7.335 - 7.385: 1.7840% ( 74) 00:07:44.897 7.385 - 7.434: 9.2769% ( 315) 00:07:44.897 7.434 - 7.483: 20.1475% ( 457) 00:07:44.897 7.483 - 7.532: 33.1827% ( 548) 00:07:44.897 7.532 - 7.582: 44.8382% ( 490) 00:07:44.897 7.582 - 7.631: 52.5214% ( 323) 00:07:44.897 7.631 - 7.680: 57.4215% ( 206) 00:07:44.897 7.680 - 7.729: 60.4186% ( 126) 00:07:44.897 7.729 - 7.778: 62.3930% ( 83) 00:07:44.897 7.778 - 7.828: 64.0105% ( 68) 00:07:44.897 7.828 - 7.877: 64.7954% ( 33) 00:07:44.897 7.877 - 7.926: 67.0552% ( 95) 00:07:44.897 7.926 - 7.975: 72.1931% ( 216) 00:07:44.897 7.975 - 8.025: 76.1180% ( 165) 00:07:44.897 8.025 - 8.074: 79.6384% ( 148) 00:07:44.897 8.074 - 8.123: 83.9201% ( 180) 00:07:44.897 8.123 - 8.172: 87.6070% ( 155) 00:07:44.897 8.172 - 8.222: 90.7945% ( 134) 00:07:44.897 8.222 - 8.271: 93.2921% ( 105) 00:07:44.897 8.271 - 8.320: 94.3863% ( 46) 00:07:44.897 8.320 - 8.369: 95.5756% ( 50) 00:07:44.897 8.369 - 8.418: 96.4082% ( 35) 00:07:44.897 8.418 - 8.468: 96.6936% ( 12) 00:07:44.897 8.468 - 8.517: 96.9791% ( 12) 00:07:44.897 8.517 - 8.566: 97.2407% ( 11) 00:07:44.897 8.566 - 8.615: 97.4786% ( 10) 00:07:44.897 8.615 - 8.665: 97.5975% ( 5) 00:07:44.897 8.665 - 8.714: 97.6689% ( 3) 00:07:44.897 8.714 - 8.763: 97.6927% ( 1) 00:07:44.897 8.763 - 8.812: 97.7165% ( 1) 00:07:44.897 8.911 - 8.960: 97.7402% ( 1) 00:07:44.897 9.255 - 9.305: 97.7640% ( 1) 00:07:44.897 9.403 - 9.452: 97.7878% ( 1) 00:07:44.897 9.452 - 9.502: 97.8116% ( 1) 00:07:44.897 9.698 - 9.748: 97.8354% ( 1) 00:07:44.897 9.797 - 9.846: 97.8592% ( 1) 00:07:44.897 10.092 - 10.142: 97.8830% ( 1) 00:07:44.897 10.191 - 10.240: 97.9068% ( 1) 00:07:44.897 10.240 - 10.289: 97.9305% ( 1) 00:07:44.897 10.289 - 10.338: 97.9543% ( 1) 00:07:44.897 10.535 - 10.585: 97.9781% ( 1) 00:07:44.897 10.929 - 10.978: 98.0019% ( 1) 00:07:44.897 11.323 - 11.372: 98.0257% ( 1) 00:07:44.897 11.618 - 11.668: 98.0495% ( 1) 00:07:44.897 12.308 - 12.357: 98.0733% ( 1) 00:07:44.897 12.603 - 12.702: 98.0971% ( 1) 00:07:44.897 12.800 - 12.898: 98.1208% ( 1) 00:07:44.897 12.997 - 13.095: 98.2160% ( 4) 00:07:44.897 13.095 - 13.194: 98.3825% ( 7) 00:07:44.897 13.194 - 13.292: 98.5490% ( 7) 00:07:44.897 13.292 - 13.391: 98.6441% ( 4) 00:07:44.897 13.391 - 13.489: 98.6917% ( 2) 00:07:44.897 13.489 - 13.588: 98.7869% ( 4) 00:07:44.897 13.588 - 13.686: 98.8820% ( 4) 00:07:44.897 13.686 - 13.785: 98.9772% ( 4) 00:07:44.897 13.785 - 13.883: 99.0247% ( 2) 00:07:44.897 13.883 - 13.982: 99.0961% ( 3) 00:07:44.897 13.982 - 14.080: 99.1437% ( 2) 00:07:44.897 14.080 - 14.178: 99.1912% ( 2) 00:07:44.897 14.178 - 14.277: 99.2864% ( 4) 00:07:44.897 14.277 - 14.375: 99.3340% ( 2) 00:07:44.897 14.375 - 14.474: 99.4053% ( 3) 00:07:44.897 14.572 - 14.671: 99.4291% ( 1) 00:07:44.897 14.671 - 14.769: 99.4767% ( 2) 00:07:44.897 15.360 - 15.458: 99.5005% ( 1) 00:07:44.897 16.345 - 16.443: 99.5243% ( 1) 00:07:44.897 17.625 - 17.723: 99.5480% ( 1) 00:07:44.897 17.723 - 17.822: 99.5718% ( 1) 00:07:44.897 18.215 - 18.314: 99.6194% ( 2) 00:07:44.897 18.314 - 18.412: 99.6670% ( 2) 00:07:44.897 18.609 - 18.708: 99.6908% ( 1) 00:07:44.897 18.708 - 18.806: 99.7146% ( 1) 00:07:44.897 19.102 - 19.200: 99.7383% ( 1) 00:07:44.897 20.480 - 20.578: 99.7621% ( 1) 00:07:44.897 21.071 - 21.169: 99.7859% ( 1) 00:07:44.897 21.268 - 21.366: 99.8097% ( 1) 00:07:44.897 
30.720 - 30.917: 99.8335% ( 1) 00:07:44.897 37.218 - 37.415: 99.8573% ( 1) 00:07:44.897 39.975 - 40.172: 99.8811% ( 1) 00:07:44.897 43.717 - 43.914: 99.9049% ( 1) 00:07:44.897 52.382 - 52.775: 99.9286% ( 1) 00:07:44.897 53.169 - 53.563: 99.9524% ( 1) 00:07:44.897 59.471 - 59.865: 99.9762% ( 1) 00:07:44.897 203.225 - 204.800: 100.0000% ( 1) 00:07:44.897 00:07:44.897 ************************************ 00:07:44.897 END TEST nvme_overhead 00:07:44.897 ************************************ 00:07:44.897 00:07:44.897 real 0m1.214s 00:07:44.897 user 0m1.068s 00:07:44.897 sys 0m0.097s 00:07:44.897 01:33:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.897 01:33:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:44.897 01:33:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:44.897 01:33:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:44.897 01:33:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.897 01:33:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.897 ************************************ 00:07:44.897 START TEST nvme_arbitration 00:07:44.897 ************************************ 00:07:44.898 01:33:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:48.181 Initializing NVMe Controllers 00:07:48.181 Attached to 0000:00:11.0 00:07:48.181 Attached to 0000:00:13.0 00:07:48.181 Attached to 0000:00:10.0 00:07:48.181 Attached to 0000:00:12.0 00:07:48.181 Associating QEMU NVMe Ctrl (12341 ) with lcore 0 00:07:48.181 Associating QEMU NVMe Ctrl (12343 ) with lcore 1 00:07:48.181 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:07:48.181 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:48.181 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:48.181 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:48.181 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:48.181 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:48.181 Initialization complete. Launching workers. 
00:07:48.181 Starting thread on core 1 with urgent priority queue 00:07:48.181 Starting thread on core 2 with urgent priority queue 00:07:48.181 Starting thread on core 3 with urgent priority queue 00:07:48.181 Starting thread on core 0 with urgent priority queue 00:07:48.181 QEMU NVMe Ctrl (12341 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:48.181 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:07:48.181 QEMU NVMe Ctrl (12343 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:48.181 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:07:48.181 QEMU NVMe Ctrl (12340 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:48.182 QEMU NVMe Ctrl (12342 ) core 3: 853.33 IO/s 117.19 secs/100000 ios 00:07:48.182 ======================================================== 00:07:48.182 00:07:48.182 ************************************ 00:07:48.182 END TEST nvme_arbitration 00:07:48.182 ************************************ 00:07:48.182 00:07:48.182 real 0m3.335s 00:07:48.182 user 0m9.288s 00:07:48.182 sys 0m0.116s 00:07:48.182 01:33:32 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.182 01:33:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:48.182 01:33:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:48.182 01:33:32 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.182 01:33:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.182 01:33:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.182 ************************************ 00:07:48.182 START TEST nvme_single_aen 00:07:48.182 ************************************ 00:07:48.182 01:33:32 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:48.442 Asynchronous Event Request test 00:07:48.442 Attached to 0000:00:11.0 00:07:48.442 Attached to 0000:00:13.0 00:07:48.442 Attached to 0000:00:10.0 00:07:48.442 Attached to 0000:00:12.0 00:07:48.442 Reset controller to setup AER completions for this process 00:07:48.442 Registering asynchronous event callbacks... 
00:07:48.442 Getting orig temperature thresholds of all controllers 00:07:48.442 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:48.442 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:48.442 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:48.442 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:48.442 Setting all controllers temperature threshold low to trigger AER 00:07:48.442 Waiting for all controllers temperature threshold to be set lower 00:07:48.442 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:48.442 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:48.442 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:48.442 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:48.442 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:48.442 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:48.442 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:48.442 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:48.442 Waiting for all controllers to trigger AER and reset threshold 00:07:48.442 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.442 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.442 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.442 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.442 Cleaning up... 00:07:48.442 ************************************ 00:07:48.442 END TEST nvme_single_aen 00:07:48.442 ************************************ 00:07:48.442 00:07:48.442 real 0m0.238s 00:07:48.442 user 0m0.098s 00:07:48.442 sys 0m0.089s 00:07:48.442 01:33:32 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.442 01:33:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:48.442 01:33:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:48.442 01:33:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.442 01:33:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.442 01:33:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.442 ************************************ 00:07:48.442 START TEST nvme_doorbell_aers 00:07:48.442 ************************************ 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:48.442 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
00:07:48.703 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:48.703 01:33:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:48.703 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:48.703 01:33:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:48.703 [2024-11-21 01:33:32.626309] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:07:58.683 Executing: test_write_invalid_db 00:07:58.683 Waiting for AER completion... 00:07:58.683 Failure: test_write_invalid_db 00:07:58.683 00:07:58.683 Executing: test_invalid_db_write_overflow_sq 00:07:58.683 Waiting for AER completion... 00:07:58.683 Failure: test_invalid_db_write_overflow_sq 00:07:58.683 00:07:58.683 Executing: test_invalid_db_write_overflow_cq 00:07:58.683 Waiting for AER completion... 00:07:58.683 Failure: test_invalid_db_write_overflow_cq 00:07:58.683 00:07:58.683 01:33:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:58.683 01:33:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:58.941 [2024-11-21 01:33:42.699213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:08.915 Executing: test_write_invalid_db 00:08:08.915 Waiting for AER completion... 00:08:08.915 Failure: test_write_invalid_db 00:08:08.915 00:08:08.915 Executing: test_invalid_db_write_overflow_sq 00:08:08.915 Waiting for AER completion... 00:08:08.915 Failure: test_invalid_db_write_overflow_sq 00:08:08.915 00:08:08.915 Executing: test_invalid_db_write_overflow_cq 00:08:08.915 Waiting for AER completion... 00:08:08.915 Failure: test_invalid_db_write_overflow_cq 00:08:08.915 00:08:08.915 01:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.915 01:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:08.915 [2024-11-21 01:33:52.729896] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:18.949 Executing: test_write_invalid_db 00:08:18.949 Waiting for AER completion... 00:08:18.949 Failure: test_write_invalid_db 00:08:18.949 00:08:18.949 Executing: test_invalid_db_write_overflow_sq 00:08:18.949 Waiting for AER completion... 00:08:18.949 Failure: test_invalid_db_write_overflow_sq 00:08:18.949 00:08:18.949 Executing: test_invalid_db_write_overflow_cq 00:08:18.949 Waiting for AER completion... 
00:08:18.949 Failure: test_invalid_db_write_overflow_cq 00:08:18.949 00:08:18.949 01:34:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.949 01:34:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:18.949 [2024-11-21 01:34:02.728488] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 Executing: test_write_invalid_db 00:08:28.953 Waiting for AER completion... 00:08:28.953 Failure: test_write_invalid_db 00:08:28.953 00:08:28.953 Executing: test_invalid_db_write_overflow_sq 00:08:28.953 Waiting for AER completion... 00:08:28.953 Failure: test_invalid_db_write_overflow_sq 00:08:28.953 00:08:28.953 Executing: test_invalid_db_write_overflow_cq 00:08:28.953 Waiting for AER completion... 00:08:28.953 Failure: test_invalid_db_write_overflow_cq 00:08:28.953 00:08:28.953 ************************************ 00:08:28.953 END TEST nvme_doorbell_aers 00:08:28.953 ************************************ 00:08:28.953 00:08:28.953 real 0m40.193s 00:08:28.953 user 0m34.015s 00:08:28.953 sys 0m5.787s 00:08:28.953 01:34:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.953 01:34:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:28.953 01:34:12 nvme -- nvme/nvme.sh@97 -- # uname 00:08:28.953 01:34:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:28.953 01:34:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:28.953 01:34:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:28.953 01:34:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.953 01:34:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.953 ************************************ 00:08:28.953 START TEST nvme_multi_aen 00:08:28.953 ************************************ 00:08:28.953 01:34:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:28.953 [2024-11-21 01:34:12.783178] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.783777] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.783850] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.785406] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.785522] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.785571] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.786649] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. 
Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.786735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.786776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.787965] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.788053] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 [2024-11-21 01:34:12.788094] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63123) is not found. Dropping the request. 00:08:28.953 Child process pid: 63645 00:08:29.214 [Child] Asynchronous Event Request test 00:08:29.215 [Child] Attached to 0000:00:11.0 00:08:29.215 [Child] Attached to 0000:00:13.0 00:08:29.215 [Child] Attached to 0000:00:10.0 00:08:29.215 [Child] Attached to 0000:00:12.0 00:08:29.215 [Child] Registering asynchronous event callbacks... 00:08:29.215 [Child] Getting orig temperature thresholds of all controllers 00:08:29.215 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:29.215 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 [Child] Cleaning up... 00:08:29.215 Asynchronous Event Request test 00:08:29.215 Attached to 0000:00:11.0 00:08:29.215 Attached to 0000:00:13.0 00:08:29.215 Attached to 0000:00:10.0 00:08:29.215 Attached to 0000:00:12.0 00:08:29.215 Reset controller to setup AER completions for this process 00:08:29.215 Registering asynchronous event callbacks... 
00:08:29.215 Getting orig temperature thresholds of all controllers 00:08:29.215 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:29.215 Setting all controllers temperature threshold low to trigger AER 00:08:29.215 Waiting for all controllers temperature threshold to be set lower 00:08:29.215 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:29.215 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:29.215 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:29.215 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:29.215 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:29.215 Waiting for all controllers to trigger AER and reset threshold 00:08:29.215 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:29.215 Cleaning up... 00:08:29.215 ************************************ 00:08:29.215 END TEST nvme_multi_aen 00:08:29.215 ************************************ 00:08:29.215 00:08:29.215 real 0m0.455s 00:08:29.215 user 0m0.154s 00:08:29.215 sys 0m0.183s 00:08:29.215 01:34:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.215 01:34:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:29.215 01:34:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:29.215 01:34:13 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:29.215 01:34:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.215 01:34:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.215 ************************************ 00:08:29.215 START TEST nvme_startup 00:08:29.215 ************************************ 00:08:29.215 01:34:13 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:29.487 Initializing NVMe Controllers 00:08:29.487 Attached to 0000:00:11.0 00:08:29.487 Attached to 0000:00:13.0 00:08:29.487 Attached to 0000:00:10.0 00:08:29.487 Attached to 0000:00:12.0 00:08:29.487 Initialization complete. 00:08:29.487 Time used:148887.500 (us). 
00:08:29.487 ************************************ 00:08:29.487 END TEST nvme_startup 00:08:29.487 ************************************ 00:08:29.487 00:08:29.487 real 0m0.213s 00:08:29.487 user 0m0.064s 00:08:29.487 sys 0m0.102s 00:08:29.487 01:34:13 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.487 01:34:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:29.487 01:34:13 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:29.487 01:34:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.487 01:34:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.487 01:34:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.487 ************************************ 00:08:29.487 START TEST nvme_multi_secondary 00:08:29.487 ************************************ 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63701 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63702 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:29.487 01:34:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:32.793 Initializing NVMe Controllers 00:08:32.793 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.793 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:32.793 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:32.793 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:32.793 Initialization complete. Launching workers. 
00:08:32.793 ======================================================== 00:08:32.793 Latency(us) 00:08:32.793 Device Information : IOPS MiB/s Average min max 00:08:32.793 PCIE (0000:00:11.0) NSID 1 from core 1: 5208.75 20.35 3071.25 1398.75 6522.03 00:08:32.793 PCIE (0000:00:13.0) NSID 1 from core 1: 5208.75 20.35 3071.20 1185.57 7051.39 00:08:32.793 PCIE (0000:00:10.0) NSID 1 from core 1: 5208.75 20.35 3070.12 1357.00 7072.76 00:08:32.793 PCIE (0000:00:12.0) NSID 1 from core 1: 5208.75 20.35 3071.16 1247.72 6546.08 00:08:32.793 PCIE (0000:00:12.0) NSID 2 from core 1: 5208.75 20.35 3071.09 1081.80 7033.84 00:08:32.793 PCIE (0000:00:12.0) NSID 3 from core 1: 5208.75 20.35 3071.07 1032.63 6550.15 00:08:32.793 ======================================================== 00:08:32.793 Total : 31252.49 122.08 3070.99 1032.63 7072.76 00:08:32.793 00:08:32.793 Initializing NVMe Controllers 00:08:32.793 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.793 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.793 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:32.793 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:32.793 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:32.793 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:32.793 Initialization complete. Launching workers. 00:08:32.793 ======================================================== 00:08:32.793 Latency(us) 00:08:32.793 Device Information : IOPS MiB/s Average min max 00:08:32.793 PCIE (0000:00:11.0) NSID 1 from core 2: 1972.02 7.70 8113.00 1882.45 19152.33 00:08:32.793 PCIE (0000:00:13.0) NSID 1 from core 2: 1972.02 7.70 8113.26 1894.05 19011.92 00:08:32.793 PCIE (0000:00:10.0) NSID 1 from core 2: 1972.02 7.70 8112.65 1873.66 17867.50 00:08:32.793 PCIE (0000:00:12.0) NSID 1 from core 2: 1972.02 7.70 8112.07 2112.73 17662.19 00:08:32.793 PCIE (0000:00:12.0) NSID 2 from core 2: 1972.02 7.70 8113.66 1783.35 18090.90 00:08:32.793 PCIE (0000:00:12.0) NSID 3 from core 2: 1972.02 7.70 8112.68 1633.26 17861.36 00:08:32.793 ======================================================== 00:08:32.793 Total : 11832.12 46.22 8112.89 1633.26 19152.33 00:08:32.793 00:08:33.054 01:34:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63701 00:08:34.970 Initializing NVMe Controllers 00:08:34.970 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.970 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.970 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:34.970 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.970 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:34.970 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:34.970 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:34.970 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:34.970 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:34.970 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:34.970 Initialization complete. Launching workers. 
00:08:34.970 ======================================================== 00:08:34.970 Latency(us) 00:08:34.970 Device Information : IOPS MiB/s Average min max 00:08:34.970 PCIE (0000:00:11.0) NSID 1 from core 0: 6489.50 25.35 2465.11 878.15 28551.48 00:08:34.970 PCIE (0000:00:13.0) NSID 1 from core 0: 6489.50 25.35 2465.13 842.79 28194.01 00:08:34.970 PCIE (0000:00:10.0) NSID 1 from core 0: 6489.50 25.35 2464.16 869.09 28528.69 00:08:34.970 PCIE (0000:00:12.0) NSID 1 from core 0: 6489.50 25.35 2465.04 814.79 28786.66 00:08:34.970 PCIE (0000:00:12.0) NSID 2 from core 0: 6489.50 25.35 2465.00 868.71 28167.35 00:08:34.970 PCIE (0000:00:12.0) NSID 3 from core 0: 6489.50 25.35 2464.96 773.27 28350.29 00:08:34.970 ======================================================== 00:08:34.970 Total : 38936.98 152.10 2464.90 773.27 28786.66 00:08:34.970 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63702 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63771 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63772 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:34.970 01:34:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:38.270 Initializing NVMe Controllers 00:08:38.270 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:38.270 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.270 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:38.270 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:38.270 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:38.270 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:38.270 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:38.270 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:38.270 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:38.270 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:38.270 Initialization complete. Launching workers. 
00:08:38.270 ======================================================== 00:08:38.270 Latency(us) 00:08:38.270 Device Information : IOPS MiB/s Average min max 00:08:38.270 PCIE (0000:00:11.0) NSID 1 from core 0: 5882.86 22.98 2719.30 1083.71 9379.59 00:08:38.270 PCIE (0000:00:13.0) NSID 1 from core 0: 5882.86 22.98 2719.35 1078.53 9063.65 00:08:38.270 PCIE (0000:00:10.0) NSID 1 from core 0: 5882.86 22.98 2718.77 1083.88 8811.77 00:08:38.270 PCIE (0000:00:12.0) NSID 1 from core 0: 5882.86 22.98 2719.82 1012.46 8701.98 00:08:38.270 PCIE (0000:00:12.0) NSID 2 from core 0: 5882.86 22.98 2719.79 1080.60 8415.95 00:08:38.270 PCIE (0000:00:12.0) NSID 3 from core 0: 5882.86 22.98 2719.74 1040.99 8300.68 00:08:38.270 ======================================================== 00:08:38.270 Total : 35297.16 137.88 2719.46 1012.46 9379.59 00:08:38.270 00:08:38.271 Initializing NVMe Controllers 00:08:38.271 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:38.271 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:38.271 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:38.271 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:38.271 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:38.271 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:38.271 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:38.271 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:38.271 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:38.271 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:38.271 Initialization complete. Launching workers. 00:08:38.271 ======================================================== 00:08:38.271 Latency(us) 00:08:38.271 Device Information : IOPS MiB/s Average min max 00:08:38.271 PCIE (0000:00:11.0) NSID 1 from core 1: 6041.68 23.60 2647.80 755.19 6557.25 00:08:38.271 PCIE (0000:00:13.0) NSID 1 from core 1: 6041.68 23.60 2647.73 743.96 7109.37 00:08:38.271 PCIE (0000:00:10.0) NSID 1 from core 1: 6041.68 23.60 2646.59 717.57 7160.95 00:08:38.271 PCIE (0000:00:12.0) NSID 1 from core 1: 6041.68 23.60 2647.56 713.33 6635.51 00:08:38.271 PCIE (0000:00:12.0) NSID 2 from core 1: 6041.68 23.60 2647.48 606.42 7370.87 00:08:38.271 PCIE (0000:00:12.0) NSID 3 from core 1: 6041.68 23.60 2647.44 583.38 7171.82 00:08:38.271 ======================================================== 00:08:38.271 Total : 36250.09 141.60 2647.43 583.38 7370.87 00:08:38.271 00:08:40.176 Initializing NVMe Controllers 00:08:40.176 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:40.176 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:40.176 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:40.176 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:40.176 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:40.176 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:40.176 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:40.176 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:40.176 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:40.176 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:40.176 Initialization complete. Launching workers. 
00:08:40.176 ======================================================== 00:08:40.176 Latency(us) 00:08:40.176 Device Information : IOPS MiB/s Average min max 00:08:40.176 PCIE (0000:00:11.0) NSID 1 from core 2: 4163.19 16.26 3842.85 753.27 16256.38 00:08:40.176 PCIE (0000:00:13.0) NSID 1 from core 2: 4163.19 16.26 3842.44 747.42 14415.47 00:08:40.176 PCIE (0000:00:10.0) NSID 1 from core 2: 4163.19 16.26 3841.59 736.68 13926.87 00:08:40.176 PCIE (0000:00:12.0) NSID 1 from core 2: 4163.19 16.26 3842.52 732.31 14402.11 00:08:40.176 PCIE (0000:00:12.0) NSID 2 from core 2: 4163.19 16.26 3842.66 667.56 17078.26 00:08:40.176 PCIE (0000:00:12.0) NSID 3 from core 2: 4163.19 16.26 3842.41 603.69 16966.28 00:08:40.176 ======================================================== 00:08:40.176 Total : 24979.14 97.57 3842.41 603.69 17078.26 00:08:40.176 00:08:40.176 ************************************ 00:08:40.177 END TEST nvme_multi_secondary 00:08:40.177 ************************************ 00:08:40.177 01:34:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63771 00:08:40.177 01:34:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63772 00:08:40.177 00:08:40.177 real 0m10.524s 00:08:40.177 user 0m18.390s 00:08:40.177 sys 0m0.647s 00:08:40.177 01:34:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.177 01:34:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:40.177 01:34:23 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:40.177 01:34:23 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:40.177 01:34:23 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62728 ]] 00:08:40.177 01:34:23 nvme -- common/autotest_common.sh@1094 -- # kill 62728 00:08:40.177 01:34:23 nvme -- common/autotest_common.sh@1095 -- # wait 62728 00:08:40.177 [2024-11-21 01:34:23.904133] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.904187] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.904206] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.904218] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.905863] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.905901] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.905912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.905924] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.907520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 
00:08:40.177 [2024-11-21 01:34:23.907559] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.907570] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.907581] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.909198] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.909237] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.909248] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:23.909259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63644) is not found. Dropping the request. 00:08:40.177 [2024-11-21 01:34:24.009715] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:40.177 01:34:24 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:40.177 01:34:24 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:40.177 01:34:24 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:40.177 01:34:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.177 01:34:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.177 01:34:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.177 ************************************ 00:08:40.177 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:40.177 ************************************ 00:08:40.177 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:40.177 * Looking for test storage... 
00:08:40.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:40.177 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.177 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.177 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.436 --rc genhtml_branch_coverage=1 00:08:40.436 --rc genhtml_function_coverage=1 00:08:40.436 --rc genhtml_legend=1 00:08:40.436 --rc geninfo_all_blocks=1 00:08:40.436 --rc geninfo_unexecuted_blocks=1 00:08:40.436 00:08:40.436 ' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.436 --rc genhtml_branch_coverage=1 00:08:40.436 --rc genhtml_function_coverage=1 00:08:40.436 --rc genhtml_legend=1 00:08:40.436 --rc geninfo_all_blocks=1 00:08:40.436 --rc geninfo_unexecuted_blocks=1 00:08:40.436 00:08:40.436 ' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.436 --rc genhtml_branch_coverage=1 00:08:40.436 --rc genhtml_function_coverage=1 00:08:40.436 --rc genhtml_legend=1 00:08:40.436 --rc geninfo_all_blocks=1 00:08:40.436 --rc geninfo_unexecuted_blocks=1 00:08:40.436 00:08:40.436 ' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.436 --rc genhtml_branch_coverage=1 00:08:40.436 --rc genhtml_function_coverage=1 00:08:40.436 --rc genhtml_legend=1 00:08:40.436 --rc geninfo_all_blocks=1 00:08:40.436 --rc geninfo_unexecuted_blocks=1 00:08:40.436 00:08:40.436 ' 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:40.436 
01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:40.436 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63928 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63928 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63928 ']' 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
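The get_first_nvme_bdf helper traced above builds its controller list by piping scripts/gen_nvme.sh through jq and taking the first address. A minimal bash sketch of that selection, assuming the generated config already lists controllers in bus order (the traced helper additionally sorts and validates the array before echoing 0000:00:10.0); this is a hedged illustration, not part of the captured log:
first_bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[0].params.traddr')   # first traddr reported by gen_nvme.sh
[ -n "$first_bdf" ] || { echo "no NVMe controllers found" >&2; exit 1; }                            # bail out if the test bed has no NVMe devices
echo "first NVMe bdf: $first_bdf"                                                                   # 0000:00:10.0 on this test bed
The spdk_tgt instance is then started with -m 0xF and the script waits on /var/tmp/spdk.sock before attaching nvme0 to that address, as the following trace lines show.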
00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.437 01:34:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:40.437 [2024-11-21 01:34:24.293362] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:08:40.437 [2024-11-21 01:34:24.293474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63928 ] 00:08:40.695 [2024-11-21 01:34:24.463511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.695 [2024-11-21 01:34:24.566582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.695 [2024-11-21 01:34:24.566677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.695 [2024-11-21 01:34:24.566856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.695 [2024-11-21 01:34:24.566963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.266 nvme0n1 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_EBvQi.txt 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.266 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:41.528 true 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732152865 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63951 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:41.528 01:34:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:43.443 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:43.443 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.443 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.444 [2024-11-21 01:34:27.239927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:43.444 [2024-11-21 01:34:27.240178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:43.444 [2024-11-21 01:34:27.240202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:43.444 [2024-11-21 01:34:27.240215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.444 [2024-11-21 01:34:27.242080] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.444 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63951 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63951 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63951 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_EBvQi.txt 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_EBvQi.txt 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63928 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63928 ']' 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63928 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63928 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.444 killing process with pid 63928 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63928' 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63928 00:08:43.444 01:34:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63928 00:08:44.832 01:34:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:44.832 01:34:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:44.832 00:08:44.832 real 0m4.499s 00:08:44.832 user 0m15.966s 00:08:44.832 sys 0m0.472s 00:08:44.832 01:34:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:44.832 ************************************ 00:08:44.832 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:44.832 ************************************ 00:08:44.832 01:34:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:44.832 01:34:28 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:44.832 01:34:28 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:44.832 01:34:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.832 01:34:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.832 01:34:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.832 ************************************ 00:08:44.832 START TEST nvme_fio 00:08:44.832 ************************************ 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:44.832 01:34:28 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:44.832 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:45.092 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:45.092 01:34:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:45.352 01:34:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:45.352 01:34:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:45.352 01:34:29 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:45.352 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:45.353 01:34:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:45.353 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:45.353 fio-3.35 00:08:45.353 Starting 1 thread 00:08:51.940 00:08:51.940 test: (groupid=0, jobs=1): err= 0: pid=64085: Thu Nov 21 01:34:35 2024 00:08:51.940 read: IOPS=24.2k, BW=94.7MiB/s (99.3MB/s)(190MiB/2001msec) 00:08:51.940 slat (nsec): min=3336, max=65920, avg=4939.38, stdev=2193.65 00:08:51.940 clat (usec): min=325, max=9290, avg=2629.90, stdev=803.19 00:08:51.940 lat (usec): min=329, max=9295, avg=2634.84, stdev=804.65 00:08:51.940 clat percentiles (usec): 00:08:51.940 | 1.00th=[ 1876], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2311], 00:08:51.940 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:08:51.940 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2933], 95.00th=[ 4555], 00:08:51.940 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7373], 99.95th=[ 7635], 00:08:51.940 | 99.99th=[ 8455] 00:08:51.940 bw ( KiB/s): min=91240, max=100512, per=98.54%, avg=95560.00, stdev=4668.20, samples=3 00:08:51.940 iops : min=22810, max=25128, avg=23890.00, stdev=1167.05, samples=3 00:08:51.940 write: IOPS=24.1k, BW=94.1MiB/s (98.7MB/s)(188MiB/2001msec); 0 zone resets 00:08:51.940 slat (usec): min=3, max=142, avg= 5.22, stdev= 2.30 00:08:51.940 clat (usec): min=333, max=9707, avg=2643.91, stdev=824.64 00:08:51.940 lat (usec): min=337, max=9712, avg=2649.13, stdev=826.11 00:08:51.940 clat percentiles (usec): 00:08:51.940 | 1.00th=[ 1876], 5.00th=[ 2147], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:51.940 | 30.00th=[ 2343], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:08:51.940 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2966], 95.00th=[ 4686], 00:08:51.940 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7504], 99.95th=[ 7701], 00:08:51.940 | 99.99th=[ 9372] 00:08:51.940 bw ( KiB/s): min=90752, max=100088, per=99.22%, avg=95602.67, stdev=4678.71, samples=3 00:08:51.940 iops : min=22688, max=25022, avg=23900.67, stdev=1169.68, samples=3 00:08:51.940 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:08:51.940 lat (msec) : 2=1.76%, 4=91.65%, 10=6.54% 00:08:51.940 cpu : usr=99.20%, sys=0.10%, ctx=4, majf=0, minf=607 
00:08:51.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:51.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:51.940 issued rwts: total=48514,48203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.940 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:51.940 00:08:51.940 Run status group 0 (all jobs): 00:08:51.940 READ: bw=94.7MiB/s (99.3MB/s), 94.7MiB/s-94.7MiB/s (99.3MB/s-99.3MB/s), io=190MiB (199MB), run=2001-2001msec 00:08:51.940 WRITE: bw=94.1MiB/s (98.7MB/s), 94.1MiB/s-94.1MiB/s (98.7MB/s-98.7MB/s), io=188MiB (197MB), run=2001-2001msec 00:08:51.940 ----------------------------------------------------- 00:08:51.940 Suppressions used: 00:08:51.940 count bytes template 00:08:51.940 1 32 /usr/src/fio/parse.c 00:08:51.940 1 8 libtcmalloc_minimal.so 00:08:51.940 ----------------------------------------------------- 00:08:51.940 00:08:51.940 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:51.940 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:51.940 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:51.940 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:52.201 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:52.201 01:34:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:52.461 01:34:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:52.461 01:34:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:52.461 01:34:36 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:52.461 01:34:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:52.461 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:52.461 fio-3.35 00:08:52.461 Starting 1 thread 00:08:59.099 00:08:59.099 test: (groupid=0, jobs=1): err= 0: pid=64141: Thu Nov 21 01:34:42 2024 00:08:59.099 read: IOPS=23.9k, BW=93.4MiB/s (98.0MB/s)(187MiB/2001msec) 00:08:59.099 slat (nsec): min=3319, max=63299, avg=5024.79, stdev=2285.50 00:08:59.099 clat (usec): min=662, max=13582, avg=2668.87, stdev=900.52 00:08:59.099 lat (usec): min=675, max=13630, avg=2673.89, stdev=902.08 00:08:59.099 clat percentiles (usec): 00:08:59.099 | 1.00th=[ 1876], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:08:59.099 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:08:59.099 | 70.00th=[ 2540], 80.00th=[ 2638], 90.00th=[ 3326], 95.00th=[ 5014], 00:08:59.099 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 8356], 99.95th=[ 9765], 00:08:59.099 | 99.99th=[13173] 00:08:59.099 bw ( KiB/s): min=95200, max=98672, per=100.00%, avg=97445.33, stdev=1947.29, samples=3 00:08:59.099 iops : min=23800, max=24668, avg=24361.33, stdev=486.82, samples=3 00:08:59.099 write: IOPS=23.8k, BW=92.9MiB/s (97.4MB/s)(186MiB/2001msec); 0 zone resets 00:08:59.099 slat (usec): min=3, max=150, avg= 5.30, stdev= 2.41 00:08:59.099 clat (usec): min=1145, max=13357, avg=2677.43, stdev=906.57 00:08:59.099 lat (usec): min=1158, max=13405, avg=2682.72, stdev=908.15 00:08:59.099 clat percentiles (usec): 00:08:59.099 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2180], 20.00th=[ 2245], 00:08:59.099 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:08:59.099 | 70.00th=[ 2540], 80.00th=[ 2671], 90.00th=[ 3359], 95.00th=[ 5014], 00:08:59.099 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 8356], 99.95th=[10290], 00:08:59.099 | 99.99th=[12780] 00:08:59.099 bw ( KiB/s): min=95104, max=99256, per=100.00%, avg=97450.67, stdev=2128.28, samples=3 00:08:59.099 iops : min=23776, max=24814, avg=24362.67, stdev=532.07, samples=3 00:08:59.099 lat (usec) : 750=0.01%, 1000=0.01% 00:08:59.099 lat (msec) : 2=1.83%, 4=90.46%, 10=7.66%, 20=0.05% 00:08:59.099 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=608 00:08:59.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.099 issued rwts: total=47870,47572,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.099 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.099 00:08:59.099 Run status group 0 (all jobs): 00:08:59.099 READ: bw=93.4MiB/s (98.0MB/s), 93.4MiB/s-93.4MiB/s (98.0MB/s-98.0MB/s), io=187MiB (196MB), run=2001-2001msec 00:08:59.099 WRITE: bw=92.9MiB/s (97.4MB/s), 92.9MiB/s-92.9MiB/s (97.4MB/s-97.4MB/s), io=186MiB (195MB), run=2001-2001msec 00:08:59.357 ----------------------------------------------------- 00:08:59.357 Suppressions used: 00:08:59.357 count bytes template 00:08:59.357 1 32 /usr/src/fio/parse.c 00:08:59.357 1 8 libtcmalloc_minimal.so 00:08:59.357 ----------------------------------------------------- 00:08:59.357 00:08:59.357 01:34:43 
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.357 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:59.357 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:59.357 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:59.615 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:59.615 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:59.615 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:59.615 01:34:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:59.615 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:59.874 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:59.874 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:59.874 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:59.874 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:59.874 01:34:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:59.874 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:59.874 fio-3.35 00:08:59.874 Starting 1 thread 00:09:08.005 00:09:08.005 test: (groupid=0, jobs=1): err= 0: pid=64202: Thu Nov 21 01:34:51 2024 00:09:08.005 read: IOPS=24.2k, BW=94.3MiB/s (98.9MB/s)(189MiB/2001msec) 00:09:08.005 slat (nsec): min=3346, max=75146, avg=4939.52, stdev=2085.89 00:09:08.005 clat (usec): min=242, max=8289, avg=2645.03, stdev=722.98 00:09:08.005 lat (usec): min=247, max=8293, avg=2649.97, stdev=724.29 00:09:08.005 clat percentiles (usec): 00:09:08.005 | 1.00th=[ 1696], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:08.005 | 30.00th=[ 2409], 40.00th=[ 
2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:08.005 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3195], 95.00th=[ 4228], 00:09:08.005 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 7832], 99.95th=[ 7963], 00:09:08.005 | 99.99th=[ 8029] 00:09:08.005 bw ( KiB/s): min=93672, max=97312, per=99.32%, avg=95954.67, stdev=1988.61, samples=3 00:09:08.005 iops : min=23418, max=24328, avg=23988.67, stdev=497.15, samples=3 00:09:08.005 write: IOPS=24.0k, BW=93.7MiB/s (98.3MB/s)(188MiB/2001msec); 0 zone resets 00:09:08.005 slat (nsec): min=3466, max=68807, avg=5259.73, stdev=2059.84 00:09:08.005 clat (usec): min=269, max=8159, avg=2650.83, stdev=725.31 00:09:08.005 lat (usec): min=274, max=8164, avg=2656.09, stdev=726.58 00:09:08.005 clat percentiles (usec): 00:09:08.005 | 1.00th=[ 1680], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:08.005 | 30.00th=[ 2409], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:09:08.005 | 70.00th=[ 2507], 80.00th=[ 2638], 90.00th=[ 3261], 95.00th=[ 4293], 00:09:08.005 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 7046], 99.95th=[ 7898], 00:09:08.005 | 99.99th=[ 8029] 00:09:08.006 bw ( KiB/s): min=93480, max=98000, per=100.00%, avg=96042.67, stdev=2320.00, samples=3 00:09:08.006 iops : min=23370, max=24500, avg=24010.67, stdev=580.00, samples=3 00:09:08.006 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.04% 00:09:08.006 lat (msec) : 2=2.55%, 4=91.84%, 10=5.53% 00:09:08.006 cpu : usr=99.20%, sys=0.10%, ctx=2, majf=0, minf=608 00:09:08.006 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.006 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.006 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.006 issued rwts: total=48328,48004,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.006 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.006 00:09:08.006 Run status group 0 (all jobs): 00:09:08.006 READ: bw=94.3MiB/s (98.9MB/s), 94.3MiB/s-94.3MiB/s (98.9MB/s-98.9MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:08.006 WRITE: bw=93.7MiB/s (98.3MB/s), 93.7MiB/s-93.7MiB/s (98.3MB/s-98.3MB/s), io=188MiB (197MB), run=2001-2001msec 00:09:08.006 ----------------------------------------------------- 00:09:08.006 Suppressions used: 00:09:08.006 count bytes template 00:09:08.006 1 32 /usr/src/fio/parse.c 00:09:08.006 1 8 libtcmalloc_minimal.so 00:09:08.006 ----------------------------------------------------- 00:09:08.006 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:08.006 01:34:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:08.006 01:34:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:08.267 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:08.267 fio-3.35 00:09:08.267 Starting 1 thread 00:09:20.494 00:09:20.494 test: (groupid=0, jobs=1): err= 0: pid=64264: Thu Nov 21 01:35:02 2024 00:09:20.494 read: IOPS=23.0k, BW=89.9MiB/s (94.2MB/s)(180MiB/2001msec) 00:09:20.494 slat (nsec): min=4227, max=79826, avg=5014.37, stdev=2184.05 00:09:20.494 clat (usec): min=214, max=9057, avg=2777.21, stdev=855.08 00:09:20.494 lat (usec): min=219, max=9095, avg=2782.22, stdev=856.39 00:09:20.494 clat percentiles (usec): 00:09:20.494 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:20.494 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:20.494 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 3490], 95.00th=[ 4621], 00:09:20.494 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 8225], 00:09:20.494 | 99.99th=[ 8979] 00:09:20.494 bw ( KiB/s): min=83032, max=100544, per=100.00%, avg=93157.33, stdev=9071.54, samples=3 00:09:20.494 iops : min=20758, max=25136, avg=23289.33, stdev=2267.88, samples=3 00:09:20.494 write: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec); 0 zone resets 00:09:20.494 slat (nsec): min=4319, max=63573, avg=5286.29, stdev=2155.22 00:09:20.494 clat (usec): min=359, max=8982, avg=2779.86, stdev=849.29 00:09:20.494 lat (usec): min=363, max=9005, avg=2785.15, stdev=850.60 00:09:20.494 clat percentiles (usec): 00:09:20.494 | 1.00th=[ 1958], 5.00th=[ 2147], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:20.494 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:20.494 | 70.00th=[ 2737], 80.00th=[ 2900], 90.00th=[ 3490], 95.00th=[ 4555], 
00:09:20.494 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7767], 99.95th=[ 8356], 00:09:20.494 | 99.99th=[ 8848] 00:09:20.494 bw ( KiB/s): min=83960, max=100176, per=100.00%, avg=93253.33, stdev=8363.89, samples=3 00:09:20.494 iops : min=20990, max=25044, avg=23313.33, stdev=2090.97, samples=3 00:09:20.494 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:09:20.494 lat (msec) : 2=1.16%, 4=91.75%, 10=7.04% 00:09:20.494 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=605 00:09:20.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:20.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:20.494 issued rwts: total=46041,45764,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:20.494 00:09:20.494 Run status group 0 (all jobs): 00:09:20.494 READ: bw=89.9MiB/s (94.2MB/s), 89.9MiB/s-89.9MiB/s (94.2MB/s-94.2MB/s), io=180MiB (189MB), run=2001-2001msec 00:09:20.494 WRITE: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec 00:09:20.494 ----------------------------------------------------- 00:09:20.494 Suppressions used: 00:09:20.494 count bytes template 00:09:20.494 1 32 /usr/src/fio/parse.c 00:09:20.494 1 8 libtcmalloc_minimal.so 00:09:20.494 ----------------------------------------------------- 00:09:20.494 00:09:20.494 01:35:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:20.494 01:35:02 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:20.494 00:09:20.494 real 0m33.955s 00:09:20.494 user 0m19.265s 00:09:20.494 sys 0m27.298s 00:09:20.494 ************************************ 00:09:20.494 END TEST nvme_fio 00:09:20.494 ************************************ 00:09:20.494 01:35:02 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.494 01:35:02 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:20.494 00:09:20.494 real 1m42.933s 00:09:20.494 user 3m39.052s 00:09:20.494 sys 0m37.834s 00:09:20.494 01:35:02 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.494 ************************************ 00:09:20.494 END TEST nvme 00:09:20.494 ************************************ 00:09:20.494 01:35:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.494 01:35:02 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:20.494 01:35:02 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.494 01:35:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.494 01:35:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.494 01:35:02 -- common/autotest_common.sh@10 -- # set +x 00:09:20.494 ************************************ 00:09:20.494 START TEST nvme_scc 00:09:20.494 ************************************ 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:20.494 * Looking for test storage... 
00:09:20.494 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.494 01:35:02 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.494 --rc genhtml_branch_coverage=1 00:09:20.494 --rc genhtml_function_coverage=1 00:09:20.494 --rc genhtml_legend=1 00:09:20.494 --rc geninfo_all_blocks=1 00:09:20.494 --rc geninfo_unexecuted_blocks=1 00:09:20.494 00:09:20.494 ' 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.494 --rc genhtml_branch_coverage=1 00:09:20.494 --rc genhtml_function_coverage=1 00:09:20.494 --rc genhtml_legend=1 00:09:20.494 --rc geninfo_all_blocks=1 00:09:20.494 --rc geninfo_unexecuted_blocks=1 00:09:20.494 00:09:20.494 ' 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:20.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.494 --rc genhtml_branch_coverage=1 00:09:20.494 --rc genhtml_function_coverage=1 00:09:20.494 --rc genhtml_legend=1 00:09:20.494 --rc geninfo_all_blocks=1 00:09:20.494 --rc geninfo_unexecuted_blocks=1 00:09:20.494 00:09:20.494 ' 00:09:20.494 01:35:02 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.494 --rc genhtml_branch_coverage=1 00:09:20.494 --rc genhtml_function_coverage=1 00:09:20.494 --rc genhtml_legend=1 00:09:20.494 --rc geninfo_all_blocks=1 00:09:20.494 --rc geninfo_unexecuted_blocks=1 00:09:20.494 00:09:20.494 ' 00:09:20.494 01:35:02 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:20.495 01:35:02 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.495 01:35:02 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.495 01:35:02 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.495 01:35:02 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.495 01:35:02 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.495 01:35:02 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.495 01:35:02 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.495 01:35:02 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:20.495 01:35:02 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
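[editor's note] A minimal, illustrative sketch (variable names are my own, not the verbatim test/common/nvme/functions.sh): the xtrace that follows shows scan_nvme_ctrls walking /sys/class/nvme/nvme*, running `nvme id-ctrl` on each controller, and folding every "name : value" output line into a bash associative array, so later checks can read registers such as ${ctrl[oncs]} without re-invoking nvme-cli.
declare -A ctrl=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # strip the padding around the field name
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=$val                 # keep the raw value text (hex, strings, ...)
done < <(nvme id-ctrl /dev/nvme0)
printf 'mdts=%s oncs=%s sqes=%s\n' "${ctrl[mdts]}" "${ctrl[oncs]}" "${ctrl[sqes]}"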
00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:20.495 01:35:02 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:20.495 01:35:02 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:20.495 01:35:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:20.495 01:35:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:20.495 01:35:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:20.495 01:35:02 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:20.495 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:20.495 Waiting for block devices as requested 00:09:20.495 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.495 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.495 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:20.495 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.699 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.699 01:35:08 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:24.699 01:35:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.699 01:35:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:24.699 01:35:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.699 01:35:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:24.699 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.700 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:24.701 01:35:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:24.701 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.702 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.973 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.973 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:24.974 
01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.974 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:24.975 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.975 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:24.976 01:35:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.976 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:24.977 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.977 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:24.978 01:35:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.978 01:35:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:24.978 01:35:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.978 01:35:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:24.978 01:35:08 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.978 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.978 
01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:24.979 
01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.979 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.980 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.981 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.982 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.982 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.983 01:35:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.983 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:24.984 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:24.984 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 
01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:24.985 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:24.986 
01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:24.986 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:24.987 01:35:08 
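The trace up to this point shows nvme/functions.sh splitting each "reg : val" line of `nvme id-ns` on ':' and storing it in a per-namespace bash associative array (ng1n1, nvme1n1). A minimal sketch of that pattern, assuming bash 4.3+ (namerefs), nvme-cli on PATH and root privileges; parse_id_output is a hypothetical helper and a simplification, not the actual nvme_get implementation:

#!/usr/bin/env bash
# Sketch only: mirror the "IFS=: read -r reg val" parsing visible in the trace above.
parse_id_output() {                      # parse_id_output <array-name> <device>
    local -n out=$1                      # nameref to the caller's associative array
    local dev=$2 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}         # e.g. "nsze    " -> "nsze", "lbaf  7 " -> "lbaf7"
        val=${val# }                     # drop the single space after ':'
        [[ -n $reg && -n $val ]] && out[$reg]=$val   # skip header/blank lines, as the trace does
    done < <(nvme id-ns "$dev")
}

declare -A ns_info=()
parse_id_output ns_info /dev/nvme1n1
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]} lbaf7=${ns_info[lbaf7]}"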
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:24.987 01:35:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:24.988 01:35:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:24.988 01:35:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:24.988 01:35:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:24.988 01:35:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
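The lines above also show the outer discovery loop: once a controller's namespaces are parsed, it is registered in ctrls/nvmes/bdfs/ordered_ctrls and the loop moves on to the next /sys/class/nvme entry (here nvme2 at 0000:00:12.0). A simplified sketch of that walk, assuming a Linux sysfs layout with PCIe-attached controllers; this is an illustration, not the actual functions.sh loop:

#!/usr/bin/env bash
# Sketch only: walk sysfs, record each controller's PCI address, list its namespace nodes.
shopt -s nullglob extglob

declare -A ctrls=() bdfs=()
declare -a ordered_ctrls=()

for ctrl in /sys/class/nvme/nvme*; do
    ctrl_dev=${ctrl##*/}                               # e.g. nvme1
    pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    # Pick up both the generic node (ng1n1) and the block node (nvme1n1),
    # matching the @( ng... | ...n )* pattern seen in the trace.
    for ns in "$ctrl/"@("ng${ctrl_dev#nvme}"|"${ctrl_dev}n")*; do
        echo "$ctrl_dev ($pci): namespace ${ns##*/}"
    done
done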
'nvme2[fr]="8.0.0 "' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.988 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:24.989 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.989 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:24.990 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.990 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:24.991 
01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.991 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.992 
01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.992 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:24.993 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.994 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:24.995 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 
01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.995 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.996 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.997 01:35:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.997 01:35:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.997 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:24.998 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:24.999 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.000 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:25.000 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.000 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.001 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:25.002 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.002 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.003 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:25.004 
01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:25.004 01:35:08 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.004 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:25.005 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:25.281 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:25.282 01:35:08 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:25.282 01:35:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:25.282 01:35:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:25.282 01:35:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:25.282 01:35:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.282 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:25.283 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:25.283 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 
01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:25.283 01:35:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:25.283 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 
01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:25.284 
01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:25.284 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:25.285 01:35:08 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:25.285 01:35:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
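Annotation (not part of the captured console output): the get_ctrls_with_feature scc pass traced around this point keeps a controller only when bit 8 of its ONCS value (0x100, the Simple Copy Command bit) is set, which is why the (( oncs & 1 << 8 )) test accepts these QEMU controllers reporting oncs=0x15d. A minimal standalone sketch of the same check, assuming the nvme-cli binary at the path used by functions.sh and /dev/nvme1 as the device under test (both assumptions for illustration, not a quote of the test code):

    # Read ONCS from id-ctrl and test the Simple Copy Command bit (bit 8 = 0x100).
    # The awk pattern assumes the human-readable "oncs : 0x15d" line format of nvme-cli.
    oncs=$(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 | awk '/^oncs/ {print $3}')
    if (( oncs & (1 << 8) )); then
        echo "/dev/nvme1 advertises the Simple Copy Command (SCC)"
    fi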
00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:25.285 01:35:08 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:25.286 01:35:08 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:25.286 01:35:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:25.286 01:35:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:25.286 01:35:08 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.546 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.118 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.118 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.118 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.118 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.378 01:35:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:26.378 01:35:10 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:26.378 01:35:10 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.378 01:35:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:26.378 ************************************ 00:09:26.378 START TEST nvme_simple_copy 00:09:26.378 ************************************ 00:09:26.378 01:35:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:26.637 Initializing NVMe Controllers 00:09:26.637 Attaching to 0000:00:10.0 00:09:26.637 Controller supports SCC. Attached to 0000:00:10.0 00:09:26.637 Namespace ID: 1 size: 6GB 00:09:26.638 Initialization complete. 
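Annotation (not part of the captured console output): the nvme_simple_copy result lines that follow summarize what test/nvme/simple_copy just did against the SCC-capable controller at 0000:00:10.0: write random data to LBAs 0-63, issue a single Simple Copy command with destination LBA 256, read the destination back, and count matching LBAs (64 of 64 indicates the copy succeeded). The same binary can be pointed at any other controller that passed the ONCS check; the re-run below is a hypothetical example using the 0000:00:13.0 controller from this scan, not a command executed by this job:

    # Hypothetical standalone re-run of the simple-copy test against another SCC-capable controller.
    /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:13.0'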
00:09:26.638 00:09:26.638 Controller QEMU NVMe Ctrl (12340 ) 00:09:26.638 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:26.638 Namespace Block Size:4096 00:09:26.638 Writing LBAs 0 to 63 with Random Data 00:09:26.638 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:26.638 LBAs matching Written Data: 64 00:09:26.638 00:09:26.638 real 0m0.277s 00:09:26.638 user 0m0.103s 00:09:26.638 sys 0m0.071s 00:09:26.638 ************************************ 00:09:26.638 END TEST nvme_simple_copy 00:09:26.638 ************************************ 00:09:26.638 01:35:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.638 01:35:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:26.638 00:09:26.638 real 0m7.812s 00:09:26.638 user 0m1.126s 00:09:26.638 sys 0m1.411s 00:09:26.638 01:35:10 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.638 ************************************ 00:09:26.638 END TEST nvme_scc 00:09:26.638 ************************************ 00:09:26.638 01:35:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:26.638 01:35:10 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:26.638 01:35:10 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:26.638 01:35:10 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:26.638 01:35:10 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:26.638 01:35:10 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:26.638 01:35:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.638 01:35:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.638 01:35:10 -- common/autotest_common.sh@10 -- # set +x 00:09:26.638 ************************************ 00:09:26.638 START TEST nvme_fdp 00:09:26.638 ************************************ 00:09:26.638 01:35:10 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:26.638 * Looking for test storage... 00:09:26.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:26.638 01:35:10 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.638 01:35:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.638 01:35:10 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.899 --rc genhtml_branch_coverage=1 00:09:26.899 --rc genhtml_function_coverage=1 00:09:26.899 --rc genhtml_legend=1 00:09:26.899 --rc geninfo_all_blocks=1 00:09:26.899 --rc geninfo_unexecuted_blocks=1 00:09:26.899 00:09:26.899 ' 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.899 --rc genhtml_branch_coverage=1 00:09:26.899 --rc genhtml_function_coverage=1 00:09:26.899 --rc genhtml_legend=1 00:09:26.899 --rc geninfo_all_blocks=1 00:09:26.899 --rc geninfo_unexecuted_blocks=1 00:09:26.899 00:09:26.899 ' 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.899 --rc genhtml_branch_coverage=1 00:09:26.899 --rc genhtml_function_coverage=1 00:09:26.899 --rc genhtml_legend=1 00:09:26.899 --rc geninfo_all_blocks=1 00:09:26.899 --rc geninfo_unexecuted_blocks=1 00:09:26.899 00:09:26.899 ' 00:09:26.899 01:35:10 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.899 --rc genhtml_branch_coverage=1 00:09:26.899 --rc genhtml_function_coverage=1 00:09:26.899 --rc genhtml_legend=1 00:09:26.899 --rc geninfo_all_blocks=1 00:09:26.899 --rc geninfo_unexecuted_blocks=1 00:09:26.899 00:09:26.899 ' 00:09:26.899 01:35:10 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.899 01:35:10 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:26.899 01:35:10 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:26.899 01:35:10 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:26.899 01:35:10 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.899 01:35:10 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.899 01:35:10 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.900 01:35:10 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.900 01:35:10 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.900 01:35:10 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:26.900 01:35:10 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:26.900 01:35:10 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:26.900 01:35:10 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.900 01:35:10 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:27.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.160 Waiting for block devices as requested 00:09:27.422 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.422 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.422 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.999 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.999 01:35:16 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:32.999 01:35:16 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.999 01:35:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:32.999 01:35:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.999 01:35:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.999 01:35:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.999 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:33.000 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.000 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.000 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:33.001 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.001 
01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:33.001 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:33.002 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:33.002 01:35:16 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:33.002 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:33.002 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
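The trace above shows the nvme_get helper from nvme/functions.sh splitting each line of `nvme id-ns` output on ':' (IFS=: ; read -r reg val) and storing the value under the register name in a Bash associative array, e.g. ng0n1[nsze]=0x140000. A minimal sketch of that parse pattern, assuming nvme-cli's plain-text "field : value" output; the array name ns_info and the device path are illustrative here, not the script's own:

    # Illustrative only -- the real helper (nvme_get) lives in nvme/functions.sh
    # and uses a name reference plus eval instead of a fixed array name.
    declare -A ns_info=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # register name with padding stripped
        [[ -n $val ]] || continue       # skip blank/separator lines
        ns_info[$reg]=${val# }          # e.g. ns_info[nsze]=0x140000
    done < <(nvme id-ns /dev/ng0n1)
    printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"
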
00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:33.003 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.003 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
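After the last LBA format field is captured, the script (functions.sh@58-63 in this trace) records the namespace under its controller and continues the outer enumeration: every /sys/class/nvme/nvme* controller gets an id-ctrl pass, each of its ng*/nvmeXnY nodes gets an id-ns pass, and the results are indexed by controller name and PCI address (bdf). A rough, illustrative sketch of that outer loop, assuming the sysfs "device" symlink resolves to the PCI BDF; the array names mirror the trace but the body is simplified:

    # Illustrative sketch of the enumeration visible in this trace; the real
    # loop additionally fills nvmes/ordered_ctrls and filters via pci_can_use.
    shopt -s extglob
    declare -A ctrls=() bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        dev=${ctrl##*/}                                   # nvme0, nvme1, ...
        bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0 (assumption: PCIe-attached)
        nvme id-ctrl "/dev/$dev" >/dev/null               # controller identify
        for ns in "$ctrl/"@("ng${dev#nvme}"|"${dev}n")*; do
            [[ -e $ns ]] && nvme id-ns "/dev/${ns##*/}" >/dev/null
        done
        ctrls[$dev]=$dev
        bdfs[$dev]=$bdf
    done
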
00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:33.004 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:33.004 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.005 01:35:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:33.005 01:35:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:33.005 01:35:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:33.005 01:35:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.005 01:35:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:33.005 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:33.006 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.006 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.007 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:33.008 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:33.009 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:33.009 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:33.009 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.010 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:33.010 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:33.010 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:33.011 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:33.011 01:35:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:33.011 01:35:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:33.011 01:35:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.011 01:35:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:33.011 01:35:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:33.012 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:33.012 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:33.013 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.013 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:33.014 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.015 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.015 
00:09:33.015-016 01:35:16 nvme_fdp -- nvme/functions.sh@21-23 -- # remaining id-ns fields for ng2n1, each read as a "reg: val" pair (IFS=:) and eval'ed into the ng2n1 array:
    fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
    lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:33.016 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
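The loop traced above is the nvme_get helper in nvme/functions.sh: it runs nvme-cli's id-ns against the namespace node and turns every "field : value" output line into an entry of a bash associative array named after the device. Below is a minimal standalone sketch of that idiom; parse_id_ns, ns_info and the whitespace handling are illustrative (the real helper assigns through eval on a dynamically named array), and it assumes nvme-cli is installed and the device node is readable, as in this CI job.

    # Sketch only: mirrors the "reg: val" parsing seen in the trace, via a nameref.
    parse_id_ns() {
        local dev=$1 out_name=$2
        local -n out=$out_name            # write into the caller's array
        local reg val
        while IFS=: read -r reg val; do
            [[ -z $val ]] && continue     # skip the banner and blank lines
            reg=${reg//[[:space:]]/}      # "lbaf  4 "  -> "lbaf4"
            val=${val# }                  # drop the leading padding space
            out[$reg]=$val
        done < <(nvme id-ns "$dev")       # real nvme-cli call, needs root
    }

    declare -A ns_info
    parse_id_ns /dev/ng2n1 ns_info
    echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"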
00:09:33.016 01:35:16 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace node: [[ -e /sys/class/nvme/nvme2/ng2n2 ]]; ns_dev=ng2n2; nvme_get ng2n2 id-ns /dev/ng2n2
00:09:33.016-018 01:35:16 nvme_fdp -- nvme/functions.sh@16-23 -- # local -gA 'ng2n2=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2, output parsed into ng2n2:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
    lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:33.018 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
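Every namespace in this trace reports flbas=0x4 with lbaf4 flagged "(in use)" and lbads:12. As a hedged illustration of what those fields mean (the test does not compute this here), the sketch below decodes the in-use LBA format index from FLBAS bits 3:0 and derives the data block size as 2^lbads; the array literal simply mirrors the values shown above.

    # Sketch: decode the in-use LBA format from the parsed fields (no device access).
    declare -A ns=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    fmt=$(( ns[flbas] & 0xf ))                     # FLBAS bits 3:0 = current format index
    [[ ${ns[lbaf$fmt]} =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "in-use LBA format $fmt, data block size $(( 1 << lbads )) bytes"   # -> 4, 4096 bytes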
00:09:33.018 01:35:16 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace node: [[ -e /sys/class/nvme/nvme2/ng2n3 ]]; ns_dev=ng2n3; nvme_get ng2n3 id-ns /dev/ng2n3
00:09:33.018-019 01:35:16 nvme_fdp -- nvme/functions.sh@16-23 -- # local -gA 'ng2n3=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3, output parsed into ng2n3:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
    lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:33.019 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
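The for-loop at functions.sh@54 matches both the generic character-device nodes (ng2n*) and the block-device nodes (nvme2n*) of controller nvme2 with one extglob pattern. The following is a self-contained sketch of that enumeration under the same sysfs layout; the echo stands in for the nvme_get call, and shopt -s extglob is needed when running the pattern outside functions.sh.

    # Sketch of the namespace walk in functions.sh@54, stand-alone.
    shopt -s extglob nullglob        # @( | ) alternation needs extglob here
    ctrl=/sys/class/nvme/nvme2
    inst=${ctrl##*nvme}              # "2"
    name=${ctrl##*/}                 # "nvme2"
    for ns in "$ctrl/"@("ng$inst"|"${name}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}             # ng2n1 ... ng2n3, nvme2n1 ...
        echo "would run: nvme id-ns /dev/$ns_dev"   # the real loop calls nvme_get here
    done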
00:09:33.019 01:35:16 nvme_fdp -- nvme/functions.sh@54-57 -- # next namespace node: [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]; ns_dev=nvme2n1; nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:33.019-021 01:35:16 nvme_fdp -- nvme/functions.sh@16-23 -- # local -gA 'nvme2n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1, output parsed into nvme2n1:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
    anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
    lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:33.021 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:33.022 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:33.022 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:33.023 01:35:16 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:33.023 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.023 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:33.024 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:33.024 01:35:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:33.024 01:35:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:33.024 01:35:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:33.024 01:35:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
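The controller being dumped here (nvme3 at 0000:00:13.0) was picked up just before this by the scan at functions.sh@47-63: iterate /sys/class/nvme/nvme*, resolve the PCI address, filter it through pci_can_use, then record the controller in the ctrls/nvmes/bdfs/ordered_ctrls tables. Below is a rough, self-contained sketch of that enumeration; pci_can_use here is a simplified stand-in for the real scripts/common.sh logic, PCI_BLOCKED is an assumed knob, and the readlink-based BDF lookup only covers PCIe-attached controllers.

#!/usr/bin/env bash
# Rough sketch of the controller scan seen at functions.sh@47-63 above:
# walk /sys/class/nvme, resolve each controller's PCI BDF, filter it, and
# register it in lookup tables keyed by the nvmeX name.
declare -A ctrls bdfs
declare -a ordered_ctrls

pci_can_use() {
    # Placeholder filter: allow everything unless the BDF appears in PCI_BLOCKED.
    [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]
}

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue                          # glob may match nothing
    pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:13.0 (PCIe only)
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                                # e.g. nvme3
    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # index 3 for nvme3, as in the trace
done

for c in "${!bdfs[@]}"; do
    echo "$c -> ${bdfs[$c]}"
done

Once a controller passes the filter, nvme_get is invoked on it with id-ctrl instead of id-ns, which is what produces the vid/ssvid/sn/mn/fr assignments that follow.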
00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.024 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 
01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:33.025 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.026 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
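The wall of functions.sh@21-23 entries above is a single loop at work: identify-controller output is read as colon-separated "reg : val" pairs and each non-empty value is stored in a bash associative array named after the controller (nvme3 here). A minimal sketch of that pattern, assuming bash 4+ associative arrays and using nvme-cli's "nvme id-ctrl" text output as an assumed stand-in source for the pairs:

# Sketch only: mirrors the IFS=: / read / eval pattern traced above.
declare -A nvme3
while IFS=: read -r reg val; do
    reg="${reg//[[:space:]]/}"              # register name, e.g. sqes, cqes, oncs
    val="${val# }"                          # value as printed, e.g. 0x66
    [[ -n "$val" ]] && eval "nvme3[${reg}]=\"${val}\""
done < <(nvme id-ctrl /dev/nvme3)           # assumed source of the "reg : val" lines
echo "${nvme3[sqes]} ${nvme3[cqes]}"        # 0x66 0x44 for the controller dumped here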
00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:33.027 01:35:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:33.027 01:35:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:33.288 01:35:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:33.288 01:35:16 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:33.288 01:35:16 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.122 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.122 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.122 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.122 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.382 01:35:18 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:34.382 01:35:18 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:34.382 01:35:18 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.382 01:35:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:34.382 ************************************ 00:09:34.382 START TEST nvme_flexible_data_placement 00:09:34.382 ************************************ 00:09:34.382 01:35:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:34.644 Initializing NVMe Controllers 00:09:34.644 Attaching to 0000:00:13.0 00:09:34.644 Controller supports FDP Attached to 0000:00:13.0 00:09:34.644 Namespace ID: 1 Endurance Group ID: 1 00:09:34.644 Initialization complete. 
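The controller walk above is how the FDP-capable device gets picked: each controller's CTRATT value is read back and bit 19 (Flexible Data Placement supported) is tested, so nvme0/nvme1/nvme2 with 0x8000 are skipped and nvme3 with 0x88010 is selected. A condensed sketch of the ctrl_has_fdp test traced at functions.sh@176-180, taking the CTRATT value directly as an argument for illustration:

# Sketch of the FDP-capability check traced above.
ctrl_has_fdp() {
    local ctratt=$1
    (( ctratt & (1 << 19) ))                # CTRATT bit 19 = FDP supported
}
ctrl_has_fdp 0x88010 && echo "nvme3: FDP"   # selected, bdf 0000:00:13.0
ctrl_has_fdp 0x8000  || echo "nvme0/1/2: no FDP"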
00:09:34.644 00:09:34.644 ================================== 00:09:34.644 == FDP tests for Namespace: #01 == 00:09:34.644 ================================== 00:09:34.644 00:09:34.644 Get Feature: FDP: 00:09:34.644 ================= 00:09:34.644 Enabled: Yes 00:09:34.644 FDP configuration Index: 0 00:09:34.644 00:09:34.644 FDP configurations log page 00:09:34.644 =========================== 00:09:34.644 Number of FDP configurations: 1 00:09:34.644 Version: 0 00:09:34.644 Size: 112 00:09:34.644 FDP Configuration Descriptor: 0 00:09:34.644 Descriptor Size: 96 00:09:34.644 Reclaim Group Identifier format: 2 00:09:34.644 FDP Volatile Write Cache: Not Present 00:09:34.644 FDP Configuration: Valid 00:09:34.644 Vendor Specific Size: 0 00:09:34.644 Number of Reclaim Groups: 2 00:09:34.644 Number of Recalim Unit Handles: 8 00:09:34.644 Max Placement Identifiers: 128 00:09:34.644 Number of Namespaces Suppprted: 256 00:09:34.644 Reclaim unit Nominal Size: 6000000 bytes 00:09:34.644 Estimated Reclaim Unit Time Limit: Not Reported 00:09:34.644 RUH Desc #000: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #001: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #002: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #003: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #004: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #005: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #006: RUH Type: Initially Isolated 00:09:34.644 RUH Desc #007: RUH Type: Initially Isolated 00:09:34.644 00:09:34.644 FDP reclaim unit handle usage log page 00:09:34.644 ====================================== 00:09:34.644 Number of Reclaim Unit Handles: 8 00:09:34.644 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:34.644 RUH Usage Desc #001: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #002: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #003: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #004: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #005: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #006: RUH Attributes: Unused 00:09:34.644 RUH Usage Desc #007: RUH Attributes: Unused 00:09:34.644 00:09:34.644 FDP statistics log page 00:09:34.644 ======================= 00:09:34.644 Host bytes with metadata written: 941682688 00:09:34.644 Media bytes with metadata written: 941924352 00:09:34.644 Media bytes erased: 0 00:09:34.644 00:09:34.644 FDP Reclaim unit handle status 00:09:34.644 ============================== 00:09:34.644 Number of RUHS descriptors: 2 00:09:34.644 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003df1 00:09:34.644 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:34.644 00:09:34.644 FDP write on placement id: 0 success 00:09:34.644 00:09:34.644 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:34.644 00:09:34.644 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:34.644 00:09:34.644 Get Feature: FDP Events for Placement handle: #0 00:09:34.644 ======================== 00:09:34.644 Number of FDP Events: 6 00:09:34.644 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:34.644 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:34.644 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:34.644 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:34.644 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:34.644 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:34.644 00:09:34.644 FDP events log page 
00:09:34.644 =================== 00:09:34.644 Number of FDP events: 1 00:09:34.644 FDP Event #0: 00:09:34.644 Event Type: RU Not Written to Capacity 00:09:34.644 Placement Identifier: Valid 00:09:34.644 NSID: Valid 00:09:34.644 Location: Valid 00:09:34.644 Placement Identifier: 0 00:09:34.644 Event Timestamp: 8 00:09:34.644 Namespace Identifier: 1 00:09:34.644 Reclaim Group Identifier: 0 00:09:34.644 Reclaim Unit Handle Identifier: 0 00:09:34.644 00:09:34.644 FDP test passed 00:09:34.644 00:09:34.644 real 0m0.248s 00:09:34.644 user 0m0.067s 00:09:34.644 sys 0m0.078s 00:09:34.644 01:35:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.644 ************************************ 00:09:34.644 END TEST nvme_flexible_data_placement 00:09:34.644 ************************************ 00:09:34.644 01:35:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:34.644 ************************************ 00:09:34.644 END TEST nvme_fdp 00:09:34.644 ************************************ 00:09:34.644 00:09:34.644 real 0m7.923s 00:09:34.644 user 0m1.139s 00:09:34.644 sys 0m1.467s 00:09:34.644 01:35:18 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.644 01:35:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:34.644 01:35:18 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:34.644 01:35:18 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:34.644 01:35:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.644 01:35:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.644 01:35:18 -- common/autotest_common.sh@10 -- # set +x 00:09:34.644 ************************************ 00:09:34.644 START TEST nvme_rpc 00:09:34.644 ************************************ 00:09:34.644 01:35:18 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:34.645 * Looking for test storage... 
00:09:34.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:34.645 01:35:18 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.645 01:35:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.645 01:35:18 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.906 01:35:18 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.906 --rc genhtml_branch_coverage=1 00:09:34.906 --rc genhtml_function_coverage=1 00:09:34.906 --rc genhtml_legend=1 00:09:34.906 --rc geninfo_all_blocks=1 00:09:34.906 --rc geninfo_unexecuted_blocks=1 00:09:34.906 00:09:34.906 ' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.906 --rc genhtml_branch_coverage=1 00:09:34.906 --rc genhtml_function_coverage=1 00:09:34.906 --rc genhtml_legend=1 00:09:34.906 --rc geninfo_all_blocks=1 00:09:34.906 --rc geninfo_unexecuted_blocks=1 00:09:34.906 00:09:34.906 ' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:34.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.906 --rc genhtml_branch_coverage=1 00:09:34.906 --rc genhtml_function_coverage=1 00:09:34.906 --rc genhtml_legend=1 00:09:34.906 --rc geninfo_all_blocks=1 00:09:34.906 --rc geninfo_unexecuted_blocks=1 00:09:34.906 00:09:34.906 ' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.906 --rc genhtml_branch_coverage=1 00:09:34.906 --rc genhtml_function_coverage=1 00:09:34.906 --rc genhtml_legend=1 00:09:34.906 --rc geninfo_all_blocks=1 00:09:34.906 --rc geninfo_unexecuted_blocks=1 00:09:34.906 00:09:34.906 ' 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:34.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65651 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:34.906 01:35:18 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65651 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65651 ']' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.906 01:35:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.906 [2024-11-21 01:35:18.789988] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
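The get_first_nvme_bdf helper traced above builds its device list by asking gen_nvme.sh for a local attach-controller config and pulling the traddr of every entry with jq; the first PCI address wins. A reduced sketch of that selection, using the paths shown in the trace:

# Sketch of get_first_nvme_bdf as traced above.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || exit 1             # here: 0000:00:10.0 ... 0000:00:13.0
bdf=${bdfs[0]}                              # nvme_rpc.sh then targets 0000:00:10.0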
00:09:34.907 [2024-11-21 01:35:18.790368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65651 ] 00:09:35.167 [2024-11-21 01:35:18.956730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.167 [2024-11-21 01:35:19.086837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.167 [2024-11-21 01:35:19.086925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.110 01:35:19 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.110 01:35:19 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:36.110 01:35:19 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:36.110 Nvme0n1 00:09:36.370 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:36.370 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:36.370 request: 00:09:36.370 { 00:09:36.370 "bdev_name": "Nvme0n1", 00:09:36.370 "filename": "non_existing_file", 00:09:36.370 "method": "bdev_nvme_apply_firmware", 00:09:36.370 "req_id": 1 00:09:36.370 } 00:09:36.370 Got JSON-RPC error response 00:09:36.370 response: 00:09:36.370 { 00:09:36.370 "code": -32603, 00:09:36.370 "message": "open file failed." 00:09:36.370 } 00:09:36.370 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:36.370 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:36.370 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:36.631 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:36.631 01:35:20 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65651 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65651 ']' 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65651 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65651 00:09:36.631 killing process with pid 65651 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65651' 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65651 00:09:36.631 01:35:20 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65651 00:09:38.544 ************************************ 00:09:38.544 END TEST nvme_rpc 00:09:38.544 ************************************ 00:09:38.544 00:09:38.544 real 0m3.506s 00:09:38.544 user 0m6.590s 00:09:38.544 sys 0m0.654s 00:09:38.544 01:35:21 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.544 01:35:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.544 01:35:22 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:38.544 01:35:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:38.544 01:35:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.544 01:35:22 -- common/autotest_common.sh@10 -- # set +x 00:09:38.544 ************************************ 00:09:38.544 START TEST nvme_rpc_timeouts 00:09:38.544 ************************************ 00:09:38.544 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:38.544 * Looking for test storage... 00:09:38.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.544 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.544 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.544 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.544 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:38.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.544 01:35:22 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.545 --rc genhtml_branch_coverage=1 00:09:38.545 --rc genhtml_function_coverage=1 00:09:38.545 --rc genhtml_legend=1 00:09:38.545 --rc geninfo_all_blocks=1 00:09:38.545 --rc geninfo_unexecuted_blocks=1 00:09:38.545 00:09:38.545 ' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.545 --rc genhtml_branch_coverage=1 00:09:38.545 --rc genhtml_function_coverage=1 00:09:38.545 --rc genhtml_legend=1 00:09:38.545 --rc geninfo_all_blocks=1 00:09:38.545 --rc geninfo_unexecuted_blocks=1 00:09:38.545 00:09:38.545 ' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.545 --rc genhtml_branch_coverage=1 00:09:38.545 --rc genhtml_function_coverage=1 00:09:38.545 --rc genhtml_legend=1 00:09:38.545 --rc geninfo_all_blocks=1 00:09:38.545 --rc geninfo_unexecuted_blocks=1 00:09:38.545 00:09:38.545 ' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.545 --rc genhtml_branch_coverage=1 00:09:38.545 --rc genhtml_function_coverage=1 00:09:38.545 --rc genhtml_legend=1 00:09:38.545 --rc geninfo_all_blocks=1 00:09:38.545 --rc geninfo_unexecuted_blocks=1 00:09:38.545 00:09:38.545 ' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65717 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65717 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65749 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65749 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65749 ']' 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
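The scripts/common.sh noise at the start of each test above is a version gate: the installed lcov version is split on '.', '-' and ':' and compared field by field against 1.15 to decide which coverage flags go into LCOV_OPTS. A compressed sketch of that comparison, with the real cmp_versions details simplified and numeric fields assumed:

# Sketch of the lt/cmp_versions check traced above.
version_lt() {                              # "is $1 < $2 ?"
    local -a a b; local i
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
version_lt 1.15 2 && echo "1.15 sorts before 2"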
00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.545 01:35:22 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:38.545 01:35:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:38.545 [2024-11-21 01:35:22.273494] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:09:38.545 [2024-11-21 01:35:22.273671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65749 ] 00:09:38.545 [2024-11-21 01:35:22.435177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:38.807 [2024-11-21 01:35:22.525491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.807 [2024-11-21 01:35:22.525509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.379 Checking default timeout settings: 00:09:39.379 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.379 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:39.379 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:39.379 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:39.639 Making settings changes with rpc: 00:09:39.639 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:39.639 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:39.899 Check default vs. modified settings: 00:09:39.899 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:39.899 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65717 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.159 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65717 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:40.160 Setting action_on_timeout is changed as expected. 
00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65717 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65717 00:09:40.160 Setting timeout_us is changed as expected. 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65717 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65717 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:40.160 Setting timeout_admin_us is changed as expected. 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
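The block above is the core of nvme_rpc_timeouts.sh: dump the bdev_nvme options with save_config, change the three timeout knobs, dump them again, and confirm each field differs between the two snapshots. A sketch of that sequence, using the RPC flags and tmp file names shown in the trace (the redirection into the tmp files is implied by the trace rather than shown):

# Sketch of the default-vs-modified comparison traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_65717
$rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_65717
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_65717  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting"  /tmp/settings_modified_65717 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
done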
00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65717 /tmp/settings_modified_65717 00:09:40.160 01:35:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65749 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65749 ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65749 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65749 00:09:40.160 killing process with pid 65749 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65749' 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65749 00:09:40.160 01:35:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65749 00:09:41.544 RPC TIMEOUT SETTING TEST PASSED. 00:09:41.544 01:35:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:41.544 ************************************ 00:09:41.544 END TEST nvme_rpc_timeouts 00:09:41.544 ************************************ 00:09:41.544 00:09:41.544 real 0m3.117s 00:09:41.544 user 0m6.040s 00:09:41.544 sys 0m0.513s 00:09:41.544 01:35:25 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.544 01:35:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:41.544 01:35:25 -- spdk/autotest.sh@239 -- # uname -s 00:09:41.544 01:35:25 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:41.544 01:35:25 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.544 01:35:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.544 01:35:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.544 01:35:25 -- common/autotest_common.sh@10 -- # set +x 00:09:41.544 ************************************ 00:09:41.544 START TEST sw_hotplug 00:09:41.544 ************************************ 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:41.544 * Looking for test storage... 
00:09:41.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.544 01:35:25 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.544 --rc genhtml_branch_coverage=1 00:09:41.544 --rc genhtml_function_coverage=1 00:09:41.544 --rc genhtml_legend=1 00:09:41.544 --rc geninfo_all_blocks=1 00:09:41.544 --rc geninfo_unexecuted_blocks=1 00:09:41.544 00:09:41.544 ' 00:09:41.544 01:35:25 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:41.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.544 --rc genhtml_branch_coverage=1 00:09:41.544 --rc genhtml_function_coverage=1 00:09:41.544 --rc genhtml_legend=1 00:09:41.544 --rc geninfo_all_blocks=1 00:09:41.544 --rc geninfo_unexecuted_blocks=1 00:09:41.545 00:09:41.545 ' 00:09:41.545 01:35:25 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.545 --rc genhtml_branch_coverage=1 00:09:41.545 --rc genhtml_function_coverage=1 00:09:41.545 --rc genhtml_legend=1 00:09:41.545 --rc geninfo_all_blocks=1 00:09:41.545 --rc geninfo_unexecuted_blocks=1 00:09:41.545 00:09:41.545 ' 00:09:41.545 01:35:25 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:41.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.545 --rc genhtml_branch_coverage=1 00:09:41.545 --rc genhtml_function_coverage=1 00:09:41.545 --rc genhtml_legend=1 00:09:41.545 --rc geninfo_all_blocks=1 00:09:41.545 --rc geninfo_unexecuted_blocks=1 00:09:41.545 00:09:41.545 ' 00:09:41.545 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:41.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.067 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.067 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.067 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.067 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:42.067 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:42.067 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:42.068 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:42.068 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.068 
01:35:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:42.068 01:35:25 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:42.068 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:42.068 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:42.068 01:35:25 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.329 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.590 Waiting for block devices as requested 00:09:42.590 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.590 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.590 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.850 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.155 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:48.155 01:35:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:48.155 01:35:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.416 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:48.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.416 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:48.678 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:48.939 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.939 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66600 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:48.939 01:35:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:48.939 01:35:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:49.200 Initializing NVMe Controllers 00:09:49.200 Attaching to 0000:00:10.0 00:09:49.200 Attaching to 0000:00:11.0 00:09:49.200 Attached to 0000:00:11.0 00:09:49.200 Attached to 0000:00:10.0 00:09:49.200 Initialization complete. Starting I/O... 00:09:49.200 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:49.200 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:49.200 00:09:50.144 QEMU NVMe Ctrl (12341 ): 2505 I/Os completed (+2505) 00:09:50.144 QEMU NVMe Ctrl (12340 ): 2511 I/Os completed (+2511) 00:09:50.144 00:09:51.529 QEMU NVMe Ctrl (12341 ): 5704 I/Os completed (+3199) 00:09:51.529 QEMU NVMe Ctrl (12340 ): 5642 I/Os completed (+3131) 00:09:51.529 00:09:52.473 QEMU NVMe Ctrl (12341 ): 9064 I/Os completed (+3360) 00:09:52.473 QEMU NVMe Ctrl (12340 ): 8980 I/Os completed (+3338) 00:09:52.473 00:09:53.432 QEMU NVMe Ctrl (12341 ): 12657 I/Os completed (+3593) 00:09:53.432 QEMU NVMe Ctrl (12340 ): 12550 I/Os completed (+3570) 00:09:53.432 00:09:54.385 QEMU NVMe Ctrl (12341 ): 16136 I/Os completed (+3479) 00:09:54.385 QEMU NVMe Ctrl (12340 ): 16047 I/Os completed (+3497) 00:09:54.385 00:09:54.956 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:54.957 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.957 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.957 [2024-11-21 01:35:38.872256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:54.957 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:54.957 [2024-11-21 01:35:38.874024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.874071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.874086] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.874102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:54.957 [2024-11-21 01:35:38.875498] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.875537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.875548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.875559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:54.957 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:54.957 [2024-11-21 01:35:38.896440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
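The detach half of each hotplug event above is the echo 1 at sw_hotplug.sh@40, run once per device in nvmes; the unregister_dev and aborting-outstanding-command messages that follow are the hotplug example app reacting to the controller disappearing underneath it. A minimal sketch of that removal step, assuming (the redirection target is not shown in this trace) that the write goes to the device's standard sysfs remove node:

    # Sketch only: simulate surprise removal of the selected NVMe controllers.
    # Assumption: the "echo 1" at sw_hotplug.sh@40 targets /sys/bus/pci/devices/<bdf>/remove,
    # which asks the kernel to detach and delete the PCI device.
    nvmes=(0000:00:10.0 0000:00:11.0)    # the two devices kept after nvme_count=2 earlier in the trace
    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    done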
00:09:54.957 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:54.957 [2024-11-21 01:35:38.897471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.897570] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.897647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.897674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:54.957 [2024-11-21 01:35:38.899097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.899186] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.899203] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 [2024-11-21 01:35:38.899215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:54.957 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:54.957 EAL: Scan for (pci) bus failed. 00:09:55.217 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:55.217 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:55.217 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.217 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.217 01:35:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:55.218 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:55.218 Attaching to 0000:00:10.0 00:09:55.218 Attached to 0000:00:10.0 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:55.218 01:35:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:55.477 Attaching to 0000:00:11.0 00:09:55.477 Attached to 0000:00:11.0 00:09:56.421 QEMU NVMe Ctrl (12340 ): 2974 I/Os completed (+2974) 00:09:56.421 QEMU NVMe Ctrl (12341 ): 2610 I/Os completed (+2610) 00:09:56.421 00:09:57.362 QEMU NVMe Ctrl (12340 ): 5777 I/Os completed (+2803) 00:09:57.362 QEMU NVMe Ctrl (12341 ): 5412 I/Os completed (+2802) 00:09:57.362 00:09:58.298 QEMU NVMe Ctrl (12340 ): 9299 I/Os completed (+3522) 00:09:58.298 QEMU NVMe Ctrl (12341 ): 8931 I/Os completed (+3519) 00:09:58.298 00:09:59.241 QEMU NVMe Ctrl (12340 ): 12989 I/Os completed (+3690) 00:09:59.241 QEMU NVMe Ctrl (12341 ): 12624 I/Os completed (+3693) 00:09:59.241 00:10:00.182 QEMU NVMe Ctrl (12340 ): 16814 I/Os completed (+3825) 00:10:00.182 QEMU NVMe Ctrl (12341 ): 16444 I/Os completed (+3820) 00:10:00.182 00:10:01.119 QEMU NVMe Ctrl (12340 ): 20518 I/Os completed (+3704) 00:10:01.119 QEMU NVMe Ctrl (12341 ): 20132 I/Os completed (+3688) 00:10:01.119 00:10:02.506 QEMU NVMe Ctrl (12340 ): 23814 I/Os completed (+3296) 00:10:02.507 QEMU NVMe Ctrl (12341 ): 23427 I/Os completed (+3295) 
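After both controllers are gone, the trace brings them back: sw_hotplug.sh@56 echoes 1 again (by all appearances a PCI bus rescan), and @58-@62 loop over the devices echoing uio_pci_generic, the BDF twice, and finally an empty string. The exact sysfs files are not visible in the trace; one plausible reading, using the standard rescan / driver_override / drivers_probe nodes, is:

    # Sketch only: rediscover the removed devices and bind them back to uio_pci_generic.
    echo 1 > /sys/bus/pci/rescan                                 # re-enumerate removed PCI functions
    for bdf in 0000:00:10.0 0000:00:11.0; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe                 # let the kernel bind the overridden driver
        echo '' > "/sys/bus/pci/devices/$bdf/driver_override"    # clear the override afterwards
    done

The "Attaching to 0000:00:10.0" / "Attached to ..." lines just above come from the hotplug app re-probing the controllers once a driver is bound again.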
00:10:02.507 00:10:03.452 QEMU NVMe Ctrl (12340 ): 26618 I/Os completed (+2804) 00:10:03.452 QEMU NVMe Ctrl (12341 ): 26231 I/Os completed (+2804) 00:10:03.452 00:10:04.395 QEMU NVMe Ctrl (12340 ): 29401 I/Os completed (+2783) 00:10:04.395 QEMU NVMe Ctrl (12341 ): 29032 I/Os completed (+2801) 00:10:04.395 00:10:05.340 QEMU NVMe Ctrl (12340 ): 32760 I/Os completed (+3359) 00:10:05.340 QEMU NVMe Ctrl (12341 ): 32392 I/Os completed (+3360) 00:10:05.340 00:10:06.282 QEMU NVMe Ctrl (12340 ): 36505 I/Os completed (+3745) 00:10:06.282 QEMU NVMe Ctrl (12341 ): 36134 I/Os completed (+3742) 00:10:06.282 00:10:07.232 QEMU NVMe Ctrl (12340 ): 40280 I/Os completed (+3775) 00:10:07.232 QEMU NVMe Ctrl (12341 ): 39903 I/Os completed (+3769) 00:10:07.232 00:10:07.232 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:07.232 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:07.232 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.232 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.232 [2024-11-21 01:35:51.172060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:07.232 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:07.232 [2024-11-21 01:35:51.172987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.173020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.173035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.173049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:07.232 [2024-11-21 01:35:51.174551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.174593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.174605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.232 [2024-11-21 01:35:51.174627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:07.503 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:07.503 [2024-11-21 01:35:51.205398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:07.503 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:07.503 [2024-11-21 01:35:51.206256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.206283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.206300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.206315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:07.503 [2024-11-21 01:35:51.207644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.207671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.207684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 [2024-11-21 01:35:51.207696] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:07.503 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:07.503 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:07.503 EAL: Scan for (pci) bus failed. 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:07.504 Attaching to 0000:00:10.0 00:10:07.504 Attached to 0000:00:10.0 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:07.504 01:35:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:07.504 Attaching to 0000:00:11.0 00:10:07.504 Attached to 0000:00:11.0 00:10:08.439 QEMU NVMe Ctrl (12340 ): 2649 I/Os completed (+2649) 00:10:08.439 QEMU NVMe Ctrl (12341 ): 2294 I/Os completed (+2294) 00:10:08.439 00:10:09.373 QEMU NVMe Ctrl (12340 ): 5993 I/Os completed (+3344) 00:10:09.373 QEMU NVMe Ctrl (12341 ): 5694 I/Os completed (+3400) 00:10:09.373 00:10:10.323 QEMU NVMe Ctrl (12340 ): 9117 I/Os completed (+3124) 00:10:10.323 QEMU NVMe Ctrl (12341 ): 8840 I/Os completed (+3146) 00:10:10.323 00:10:11.270 QEMU NVMe Ctrl (12340 ): 11933 I/Os completed (+2816) 00:10:11.270 QEMU NVMe Ctrl (12341 ): 11656 I/Os completed (+2816) 00:10:11.270 00:10:12.213 QEMU NVMe Ctrl (12340 ): 14687 I/Os completed (+2754) 00:10:12.213 QEMU NVMe Ctrl (12341 ): 14409 I/Os completed (+2753) 00:10:12.213 00:10:13.153 QEMU NVMe Ctrl (12340 ): 17891 I/Os completed (+3204) 00:10:13.153 QEMU NVMe Ctrl (12341 ): 17613 I/Os completed (+3204) 00:10:13.153 00:10:14.536 QEMU NVMe Ctrl (12340 ): 20971 I/Os completed (+3080) 00:10:14.536 QEMU NVMe Ctrl (12341 ): 20692 I/Os completed (+3079) 00:10:14.536 
00:10:15.478 QEMU NVMe Ctrl (12340 ): 24063 I/Os completed (+3092) 00:10:15.478 QEMU NVMe Ctrl (12341 ): 23783 I/Os completed (+3091) 00:10:15.478 00:10:16.419 QEMU NVMe Ctrl (12340 ): 27287 I/Os completed (+3224) 00:10:16.419 QEMU NVMe Ctrl (12341 ): 27013 I/Os completed (+3230) 00:10:16.419 00:10:17.362 QEMU NVMe Ctrl (12340 ): 30988 I/Os completed (+3701) 00:10:17.362 QEMU NVMe Ctrl (12341 ): 30714 I/Os completed (+3701) 00:10:17.363 00:10:18.305 QEMU NVMe Ctrl (12340 ): 34619 I/Os completed (+3631) 00:10:18.305 QEMU NVMe Ctrl (12341 ): 34344 I/Os completed (+3630) 00:10:18.305 00:10:19.249 QEMU NVMe Ctrl (12340 ): 38286 I/Os completed (+3667) 00:10:19.249 QEMU NVMe Ctrl (12341 ): 38005 I/Os completed (+3661) 00:10:19.249 00:10:19.511 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:19.511 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:19.511 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.511 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.511 [2024-11-21 01:36:03.451648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:19.511 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:19.511 [2024-11-21 01:36:03.452594] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.452661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.452676] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.452694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.511 [2024-11-21 01:36:03.454265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.454300] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.454312] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.511 [2024-11-21 01:36:03.454323] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:19.773 [2024-11-21 01:36:03.472663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:19.773 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:19.773 [2024-11-21 01:36:03.473511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.473548] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.473563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.473578] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.773 [2024-11-21 01:36:03.474933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.474959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.474973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 [2024-11-21 01:36:03.474983] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:19.773 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:19.773 EAL: Scan for (pci) bus failed. 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:19.773 Attaching to 0000:00:10.0 00:10:19.773 Attached to 0000:00:10.0 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:19.773 01:36:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:19.773 Attaching to 0000:00:11.0 00:10:19.773 Attached to 0000:00:11.0 00:10:19.773 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:19.773 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:19.773 [2024-11-21 01:36:03.725105] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:32.011 01:36:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:32.011 01:36:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:32.011 01:36:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.85 00:10:32.011 01:36:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.85 00:10:32.011 01:36:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:32.011 01:36:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.85 00:10:32.011 01:36:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.85 2 00:10:32.011 remove_attach_helper took 42.85s to complete (handling 2 nvme drive(s)) 01:36:15 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66600 00:10:38.643 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66600) - No such process 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66600 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67149 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:38.643 01:36:21 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67149 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67149 ']' 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.643 01:36:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.643 [2024-11-21 01:36:21.813765] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
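The 42.85 s reported above is produced by timing_cmd, which runs remove_attach_helper under bash's time keyword with TIMEFORMAT=%2R so that only the wall-clock seconds (two decimals) are emitted, and the kill -0 66600 that follows is a simple liveness probe for the hotplug app, which by then has already exited ("No such process"). A minimal sketch of that timing pattern; measure is a hypothetical name, not the real helper:

    # Sketch only: capture a command's wall-clock runtime the way timing_cmd does.
    # TIMEFORMAT=%2R makes the bash time keyword print just the elapsed seconds.
    measure() {
        local TIMEFORMAT=%2R elapsed
        elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )    # the timing report goes to stderr
        echo "$elapsed"
    }
    helper_time=$(measure sleep 1)                           # hypothetical usage; prints roughly 1.00
    printf 'remove_attach_helper took %ss to complete\n' "$helper_time"
    kill -0 66600 2> /dev/null || echo "hotplug app (pid 66600) already gone"   # pid taken from this trace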
00:10:38.643 [2024-11-21 01:36:21.813914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67149 ] 00:10:38.643 [2024-11-21 01:36:21.978896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.643 [2024-11-21 01:36:22.101784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:38.905 01:36:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:38.905 01:36:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.491 01:36:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.491 01:36:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.491 01:36:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:45.491 01:36:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.491 [2024-11-21 01:36:28.886669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:45.491 [2024-11-21 01:36:28.887881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.491 [2024-11-21 01:36:28.887916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.492 [2024-11-21 01:36:28.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.492 [2024-11-21 01:36:28.887947] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.492 [2024-11-21 01:36:28.887955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.492 [2024-11-21 01:36:28.887963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.492 [2024-11-21 01:36:28.887970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.492 [2024-11-21 01:36:28.887978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.492 [2024-11-21 01:36:28.887985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.492 [2024-11-21 01:36:28.887996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.492 [2024-11-21 01:36:28.888002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.492 [2024-11-21 01:36:28.888010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:45.492 01:36:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.492 01:36:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.492 01:36:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:45.492 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:45.752 [2024-11-21 01:36:29.486662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
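For the bdev-based passes the checks run against a live SPDK target instead of the example app: tgt_run_hotplug launches spdk_tgt, waits for its RPC socket, then enables hotplug over RPC (sw_hotplug.sh@109-@115 above). A minimal stand-in for that startup sequence, assuming it is run from the SPDK repo root; the real waitforlisten in autotest_common.sh does more (timeouts, pid checks) than this loop:

    # Sketch only: start the SPDK target and poll its RPC socket until it answers.
    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.1
    done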
00:10:45.752 [2024-11-21 01:36:29.487830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.752 [2024-11-21 01:36:29.487860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.752 [2024-11-21 01:36:29.487870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.752 [2024-11-21 01:36:29.487882] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.752 [2024-11-21 01:36:29.487891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.752 [2024-11-21 01:36:29.487898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.752 [2024-11-21 01:36:29.487906] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.752 [2024-11-21 01:36:29.487912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.752 [2024-11-21 01:36:29.487921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:45.752 [2024-11-21 01:36:29.487927] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:45.752 [2024-11-21 01:36:29.487935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:45.752 [2024-11-21 01:36:29.487941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:46.013 01:36:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.013 01:36:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.013 01:36:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:46.013 01:36:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.274 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
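The bdev_bdfs helper seen at sw_hotplug.sh@12-@13 is how the script asks the target which PCI addresses still back an NVMe bdev: bdev_get_bdevs piped through the jq filter quoted in the trace and de-duplicated with sort -u. A sketch, with rpc.py standing in for the rpc_cmd wrapper used in the trace:

    # Sketch only: list the PCI addresses (BDFs) currently backing NVMe bdevs.
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    printf '%s\n' "${bdfs[@]}"    # before a removal this prints 0000:00:10.0 and 0000:00:11.0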
00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.275 01:36:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:58.513 01:36:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:58.513 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:58.513 [2024-11-21 01:36:42.286847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
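The "Still waiting for ... to be gone" lines are a poll loop: after the surprise removal the script re-reads bdev_bdfs every 0.5 s until none of the removed BDFs are reported any more, which is what the (( 2 > 0 )), (( 1 > 0 )), (( 0 > 0 )) checks above count down. A sketch of that loop, reusing the bdev_bdfs helper from the previous sketch:

    # Sketch only: wait until the detached controllers stop showing up in bdev_get_bdevs.
    while :; do
        bdfs=($(bdev_bdfs))
        ((${#bdfs[@]} == 0)) && break
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done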
00:10:58.513 [2024-11-21 01:36:42.288024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-21 01:36:42.288060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-21 01:36:42.288078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-21 01:36:42.288095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-21 01:36:42.288102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-21 01:36:42.288111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-21 01:36:42.288118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-21 01:36:42.288126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-21 01:36:42.288132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.513 [2024-11-21 01:36:42.288140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.513 [2024-11-21 01:36:42.288147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.513 [2024-11-21 01:36:42.288155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.774 [2024-11-21 01:36:42.686843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
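Because bdev_nvme_set_hotplug -e was issued right after startup (sw_hotplug.sh@115), the target re-attaches the controllers on its own once the bus is rescanned and the drivers are rebound; the sleep 12 at @66 gives its poller time to do so, and the @71 check then compares the reported BDF list against the expected pair. A condensed sketch of that enable-and-verify step, again with rpc.py in place of rpc_cmd and bdev_bdfs as sketched above:

    # Sketch only: enable the target's NVMe hotplug monitor, then confirm re-attachment.
    ./scripts/rpc.py bdev_nvme_set_hotplug -e        # enable periodic hotplug probing
    # ... surprise removal, rescan and driver rebind as sketched earlier ...
    sleep 12                                         # sw_hotplug.sh@66: let the monitor re-probe
    bdfs=($(bdev_bdfs))
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]] && echo "both controllers re-attached"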
00:10:58.774 [2024-11-21 01:36:42.687999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.774 [2024-11-21 01:36:42.688029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.774 [2024-11-21 01:36:42.688042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.774 [2024-11-21 01:36:42.688053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.774 [2024-11-21 01:36:42.688062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.774 [2024-11-21 01:36:42.688084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.774 [2024-11-21 01:36:42.688094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.774 [2024-11-21 01:36:42.688101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.774 [2024-11-21 01:36:42.688108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:58.774 [2024-11-21 01:36:42.688115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:58.774 [2024-11-21 01:36:42.688123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:58.774 [2024-11-21 01:36:42.688129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:59.033 01:36:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.033 01:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.033 01:36:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:59.033 01:36:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:59.293 01:36:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:59.293 01:36:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:59.293 01:36:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.528 01:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:11.528 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:11.528 [2024-11-21 01:36:55.187028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:11.528 [2024-11-21 01:36:55.188223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.528 [2024-11-21 01:36:55.188259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.528 [2024-11-21 01:36:55.188270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.528 [2024-11-21 01:36:55.188286] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.528 [2024-11-21 01:36:55.188293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.529 [2024-11-21 01:36:55.188303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.529 [2024-11-21 01:36:55.188310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.529 [2024-11-21 01:36:55.188317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.529 [2024-11-21 01:36:55.188323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.529 [2024-11-21 01:36:55.188331] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.529 [2024-11-21 01:36:55.188337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.529 [2024-11-21 01:36:55.188344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.790 01:36:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.790 01:36:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.790 [2024-11-21 01:36:55.687029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:11.790 [2024-11-21 01:36:55.688179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.790 [2024-11-21 01:36:55.688208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.790 [2024-11-21 01:36:55.688220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.790 [2024-11-21 01:36:55.688232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.790 [2024-11-21 01:36:55.688241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.790 [2024-11-21 01:36:55.688248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.790 [2024-11-21 01:36:55.688257] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.790 [2024-11-21 01:36:55.688263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.790 [2024-11-21 01:36:55.688272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.790 [2024-11-21 01:36:55.688280] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.790 [2024-11-21 01:36:55.688288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.790 [2024-11-21 01:36:55.688294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.790 01:36:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:11.790 01:36:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.362 01:36:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.362 01:36:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.362 01:36:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.362 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.624 01:36:56 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.624 01:36:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:11:24.857 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:24.857 01:37:08 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # 
local hotplug_wait=6 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:24.857 01:37:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.471 01:37:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.471 01:37:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.471 01:37:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:31.471 01:37:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.471 [2024-11-21 01:37:14.650843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:31.471 [2024-11-21 01:37:14.651750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:14.651781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:14.651791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:14.651808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:14.651815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:14.651823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:14.651830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:14.651837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:14.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:14.651852] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:14.651859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:14.651868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:15.050844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:31.471 [2024-11-21 01:37:15.051711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:15.051740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:15.051751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:15.051761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:15.051770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:15.051777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:15.051786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:15.051793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:15.051800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 [2024-11-21 01:37:15.051808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.471 [2024-11-21 01:37:15.051816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.471 [2024-11-21 01:37:15.051822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.471 01:37:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.471 01:37:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.471 01:37:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # 
for dev in "${nvmes[@]}" 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.471 01:37:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.841 01:37:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.841 01:37:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.841 01:37:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.841 [2024-11-21 01:37:27.451065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
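The bare "echo 1" pairs traced at nvme/sw_hotplug.sh line 40 are the surprise-removal trigger for each device; set -x never prints redirections, so the destination of the write is not visible in this log. A minimal sketch of what that step presumably does, with the sysfs path as an assumption (only the echoed value comes from the trace):

    # hypothetical reconstruction of sw_hotplug.sh lines 39-40 (remove phase);
    # the sysfs 'remove' attribute is assumed, the xtrace output only shows "echo 1"
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"   # hot-remove the PCI function
    done

The nvme_ctrlr_fail and "aborting outstanding command" messages around this point are SPDK reacting to that removal: each controller is marked failed and the ASYNC EVENT REQUEST commands still queued on its admin queue are aborted (ABORTED - BY REQUEST), one completion per outstanding AER (cid 187-190).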
00:11:43.841 [2024-11-21 01:37:27.452285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.841 [2024-11-21 01:37:27.452318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.841 [2024-11-21 01:37:27.452329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.841 [2024-11-21 01:37:27.452345] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.841 [2024-11-21 01:37:27.452352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.841 [2024-11-21 01:37:27.452361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.841 [2024-11-21 01:37:27.452368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.841 [2024-11-21 01:37:27.452375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.841 [2024-11-21 01:37:27.452382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.841 [2024-11-21 01:37:27.452390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.841 [2024-11-21 01:37:27.452396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.841 [2024-11-21 01:37:27.452404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.841 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.842 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:43.842 01:37:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.842 01:37:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 01:37:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.842 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:43.842 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.103 [2024-11-21 01:37:27.851064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
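The repeated bdfs=($(bdev_bdfs)) and "Still waiting for %s to be gone" entries are the polling loop that waits for the removed controllers to disappear from the bdev layer. Reconstructed from the trace (the jq/sort pipeline at sw_hotplug.sh lines 12-13 is shown verbatim by xtrace; the loop layout around lines 50-51 is inferred):

    # helper traced at sw_hotplug.sh lines 12-13: list the PCI addresses backing NVMe bdevs
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # inferred shape of the wait loop around lines 50-51: poll until no NVMe bdev is left
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done

The RPC output reaches jq through a process substitution, which is why the trace shows /dev/fd/63 as the jq input file.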
00:11:44.103 [2024-11-21 01:37:27.851925] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.103 [2024-11-21 01:37:27.851956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.103 [2024-11-21 01:37:27.851968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.103 [2024-11-21 01:37:27.851980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.103 [2024-11-21 01:37:27.851991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.103 [2024-11-21 01:37:27.851998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.103 [2024-11-21 01:37:27.852008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.103 [2024-11-21 01:37:27.852014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.103 [2024-11-21 01:37:27.852022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.103 [2024-11-21 01:37:27.852029] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.103 [2024-11-21 01:37:27.852037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.103 [2024-11-21 01:37:27.852043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.103 01:37:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.103 01:37:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.103 01:37:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.103 01:37:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.103 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:44.103 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
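Once the wait loop sees zero bdevs, the helper re-attaches the devices: the echo 1 at line 56 is most plausibly a PCI rescan trigger, and the per-device block at lines 58-62 restores the uio_pci_generic binding. As above, only the echoed values (uio_pci_generic, the BDF twice, an empty string) appear in the log; every sysfs path below is an assumption:

    # hypothetical reconstruction of the re-attach phase (sw_hotplug.sh ~lines 56-66)
    echo 1 > /sys/bus/pci/rescan                                          # line 56 (path assumed)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # line 59
        # lines 60-61: the trace shows the BDF echoed twice; the destinations are not
        # visible, /sys/bus/pci/drivers_probe (shown here) is one plausible target
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # line 62: clear the override
    done
    sleep 12                                                              # line 66: let the hotplug monitor re-attach

After the 12 s settle time, line 68 onwards repeats the bdev_bdfs query and line 71 requires the result to match both original addresses before the next hotplug event starts.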
00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.364 01:37:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.605 [2024-11-21 01:37:40.351351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
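The comparison traced at line 71 looks strange only because of xtrace quoting: the right-hand side of [[ == ]] is a pattern, so bash escapes every character when printing it, turning 0000:00:10.0 0000:00:11.0 into the \0\0\0\0\:... form seen above. The check itself simply requires that the re-attached bdevs report exactly the original pair of addresses; a condensed reconstruction:

    # inferred shape of the post-attach verification (sw_hotplug.sh ~lines 70-71)
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # must match the original device list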
00:11:56.605 [2024-11-21 01:37:40.352481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.605 [2024-11-21 01:37:40.352513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.605 [2024-11-21 01:37:40.352524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.605 [2024-11-21 01:37:40.352541] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.605 [2024-11-21 01:37:40.352548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.605 [2024-11-21 01:37:40.352557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.605 [2024-11-21 01:37:40.352564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.605 [2024-11-21 01:37:40.352573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.605 [2024-11-21 01:37:40.352580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.605 [2024-11-21 01:37:40.352588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.605 [2024-11-21 01:37:40.352594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.605 [2024-11-21 01:37:40.352602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.605 01:37:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:56.605 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.867 [2024-11-21 01:37:40.751348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
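Every rpc_cmd call in this trace is bracketed by the xtrace_disable / set +x pair from autotest_common.sh (lines 563 and 10) and followed by the [[ 0 == 0 ]] check at line 591, which apparently verifies the saved RPC status; this keeps the RPC client's own very verbose tracing out of the log while still failing the test if the RPC errors out. A rough sketch of the pattern, simplified from what the trace shows and assuming a matching xtrace_restore helper re-enables tracing afterwards (its call cannot appear here because tracing is off at that point):

    # simplified illustration of the rpc_cmd bracketing seen throughout this trace
    xtrace_disable            # autotest_common.sh:563, ends in "set +x" at line 10
    rpc_cmd bdev_get_bdevs
    rc=$?
    xtrace_restore            # assumed counterpart that turns tracing back on
    [[ $rc == 0 ]]            # line 591: propagate RPC failures to the test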
00:11:56.867 [2024-11-21 01:37:40.753732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.867 [2024-11-21 01:37:40.753764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.867 [2024-11-21 01:37:40.753776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.867 [2024-11-21 01:37:40.753788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.867 [2024-11-21 01:37:40.753799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.867 [2024-11-21 01:37:40.753806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.867 [2024-11-21 01:37:40.753815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.867 [2024-11-21 01:37:40.753822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.867 [2024-11-21 01:37:40.753830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.867 [2024-11-21 01:37:40.753838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.867 [2024-11-21 01:37:40.753847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.867 [2024-11-21 01:37:40.753853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.128 01:37:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.128 01:37:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.128 01:37:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.128 01:37:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:57.128 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:57.128 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.128 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:57.128 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:57.128 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:57.389 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.389 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.389 01:37:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.61 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.61 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.61 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.61 2 00:12:09.625 remove_attach_helper took 44.61s to complete (handling 2 nvme drive(s)) 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:09.625 01:37:53 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67149 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67149 ']' 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67149 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67149 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.625 killing process with pid 67149 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67149' 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67149 00:12:09.625 01:37:53 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67149 00:12:10.568 01:37:54 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.401 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.401 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:11.401 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.401 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:11.401 ************************************ 00:12:11.401 END TEST sw_hotplug 00:12:11.401 ************************************ 
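The "remove_attach_helper took 44.61s to complete (handling 2 nvme drive(s))" summary is produced by the timing_cmd wrapper traced at autotest_common.sh lines 709-722 together with sw_hotplug.sh lines 19-22: the helper runs under bash's time with TIMEFORMAT=%2R, the captured wall-clock seconds are echoed back, and sw_hotplug.sh formats them into the summary line. A condensed sketch under those assumptions (the real wrapper does extra file-descriptor juggling around exec so the helper's own output is preserved):

    # condensed sketch of the timing path; variable names follow the trace
    timing_cmd() {                                   # autotest_common.sh ~lines 709-722
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # run the helper under bash's time; with %2R only the real time is printed
        time=$( { time "$@" > /dev/null 2>&1; } 2>&1 ) || cmd_es=$?
        echo "$time"                                 # e.g. 44.61
        return "$cmd_es"
    }
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)                    # sw_hotplug.sh line 21
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' \
        "$helper_time" "${#nvmes[@]}"                                          # line 22 (count argument inferred)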
00:12:11.401 00:12:11.401 real 2m30.065s 00:12:11.401 user 1m51.479s 00:12:11.401 sys 0m17.087s 00:12:11.401 01:37:55 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:11.401 01:37:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.401 01:37:55 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:11.401 01:37:55 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:11.401 01:37:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:11.401 01:37:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.401 01:37:55 -- common/autotest_common.sh@10 -- # set +x 00:12:11.401 ************************************ 00:12:11.401 START TEST nvme_xnvme 00:12:11.401 ************************************ 00:12:11.401 01:37:55 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:11.666 * Looking for test storage... 00:12:11.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.666 01:37:55 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.666 --rc genhtml_branch_coverage=1 00:12:11.666 --rc genhtml_function_coverage=1 00:12:11.666 --rc genhtml_legend=1 00:12:11.666 --rc geninfo_all_blocks=1 00:12:11.666 --rc geninfo_unexecuted_blocks=1 00:12:11.666 00:12:11.666 ' 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.666 --rc genhtml_branch_coverage=1 00:12:11.666 --rc genhtml_function_coverage=1 00:12:11.666 --rc genhtml_legend=1 00:12:11.666 --rc geninfo_all_blocks=1 00:12:11.666 --rc geninfo_unexecuted_blocks=1 00:12:11.666 00:12:11.666 ' 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.666 --rc genhtml_branch_coverage=1 00:12:11.666 --rc genhtml_function_coverage=1 00:12:11.666 --rc genhtml_legend=1 00:12:11.666 --rc geninfo_all_blocks=1 00:12:11.666 --rc geninfo_unexecuted_blocks=1 00:12:11.666 00:12:11.666 ' 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.666 --rc genhtml_branch_coverage=1 00:12:11.666 --rc genhtml_function_coverage=1 00:12:11.666 --rc genhtml_legend=1 00:12:11.666 --rc geninfo_all_blocks=1 00:12:11.666 --rc geninfo_unexecuted_blocks=1 00:12:11.666 00:12:11.666 ' 00:12:11.666 01:37:55 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:11.666 01:37:55 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:11.666 01:37:55 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:11.666 01:37:55 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:11.666 01:37:55 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
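The long CONFIG_* dump running through here is test/common/build_config.sh, the shell-side mirror of the generated include/spdk/config.h that applications.sh inspects a little further down; each configure switch appears once in each form. Examples taken from this same log:

    CONFIG_XNVME=y     <->  #define SPDK_CONFIG_XNVME 1
    CONFIG_ASAN=y      <->  #define SPDK_CONFIG_ASAN 1
    CONFIG_FUZZER=n    <->  #undef SPDK_CONFIG_FUZZER

applications.sh only needs the header to decide whether the build has SPDK_CONFIG_DEBUG defined, which is what gates the SPDK_AUTOTEST_DEBUG_APPS behaviour checked right after the config.h dump.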
00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:11.667 01:37:55 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:11.667 01:37:55 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:11.667 01:37:55 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:11.667 01:37:55 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:11.667 #define SPDK_CONFIG_H 00:12:11.667 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:11.667 #define SPDK_CONFIG_APPS 1 00:12:11.667 #define SPDK_CONFIG_ARCH native 00:12:11.667 #define SPDK_CONFIG_ASAN 1 00:12:11.667 #undef SPDK_CONFIG_AVAHI 00:12:11.667 #undef SPDK_CONFIG_CET 00:12:11.667 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:11.667 #define SPDK_CONFIG_COVERAGE 1 00:12:11.667 #define SPDK_CONFIG_CROSS_PREFIX 00:12:11.667 #undef SPDK_CONFIG_CRYPTO 00:12:11.667 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:11.667 #undef SPDK_CONFIG_CUSTOMOCF 00:12:11.667 #undef SPDK_CONFIG_DAOS 00:12:11.667 #define SPDK_CONFIG_DAOS_DIR 00:12:11.667 #define SPDK_CONFIG_DEBUG 1 00:12:11.667 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:11.667 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:11.667 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:11.667 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:11.667 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:11.667 #undef SPDK_CONFIG_DPDK_UADK 00:12:11.667 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:11.667 #define SPDK_CONFIG_EXAMPLES 1 00:12:11.667 #undef SPDK_CONFIG_FC 00:12:11.667 #define SPDK_CONFIG_FC_PATH 00:12:11.667 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:11.667 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:11.667 #define SPDK_CONFIG_FSDEV 1 00:12:11.667 #undef SPDK_CONFIG_FUSE 00:12:11.667 #undef SPDK_CONFIG_FUZZER 00:12:11.667 #define SPDK_CONFIG_FUZZER_LIB 00:12:11.667 #undef SPDK_CONFIG_GOLANG 00:12:11.667 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:11.667 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:11.667 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:11.667 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:11.667 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:11.667 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:11.667 #undef SPDK_CONFIG_HAVE_LZ4 00:12:11.667 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:11.667 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:11.668 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:11.668 #define SPDK_CONFIG_IDXD 1 00:12:11.668 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:11.668 #undef SPDK_CONFIG_IPSEC_MB 00:12:11.668 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:11.668 #define SPDK_CONFIG_ISAL 1 00:12:11.668 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:11.668 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:11.668 #define SPDK_CONFIG_LIBDIR 00:12:11.668 #undef SPDK_CONFIG_LTO 00:12:11.668 #define SPDK_CONFIG_MAX_LCORES 128 00:12:11.668 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:11.668 #define SPDK_CONFIG_NVME_CUSE 1 00:12:11.668 #undef SPDK_CONFIG_OCF 00:12:11.668 #define SPDK_CONFIG_OCF_PATH 00:12:11.668 #define SPDK_CONFIG_OPENSSL_PATH 00:12:11.668 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:11.668 #define SPDK_CONFIG_PGO_DIR 00:12:11.668 #undef SPDK_CONFIG_PGO_USE 00:12:11.668 #define SPDK_CONFIG_PREFIX /usr/local 00:12:11.668 #undef SPDK_CONFIG_RAID5F 00:12:11.668 #undef SPDK_CONFIG_RBD 00:12:11.668 #define SPDK_CONFIG_RDMA 1 00:12:11.668 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:11.668 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:11.668 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:11.668 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:11.668 #define SPDK_CONFIG_SHARED 1 00:12:11.668 #undef SPDK_CONFIG_SMA 00:12:11.668 #define SPDK_CONFIG_TESTS 1 00:12:11.668 #undef SPDK_CONFIG_TSAN 00:12:11.668 #define SPDK_CONFIG_UBLK 1 00:12:11.668 #define SPDK_CONFIG_UBSAN 1 00:12:11.668 #undef SPDK_CONFIG_UNIT_TESTS 00:12:11.668 #undef SPDK_CONFIG_URING 00:12:11.668 #define SPDK_CONFIG_URING_PATH 00:12:11.668 #undef SPDK_CONFIG_URING_ZNS 00:12:11.668 #undef SPDK_CONFIG_USDT 00:12:11.668 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:11.668 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:11.668 #undef SPDK_CONFIG_VFIO_USER 00:12:11.668 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:11.668 #define SPDK_CONFIG_VHOST 1 00:12:11.668 #define SPDK_CONFIG_VIRTIO 1 00:12:11.668 #undef SPDK_CONFIG_VTUNE 00:12:11.668 #define SPDK_CONFIG_VTUNE_DIR 00:12:11.668 #define SPDK_CONFIG_WERROR 1 00:12:11.668 #define SPDK_CONFIG_WPDK_DIR 00:12:11.668 #define SPDK_CONFIG_XNVME 1 00:12:11.668 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:11.668 01:37:55 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:11.668 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.668 01:37:55 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.668 01:37:55 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.668 01:37:55 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.668 01:37:55 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.668 01:37:55 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.668 01:37:55 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.668 01:37:55 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.668 01:37:55 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:11.668 01:37:55 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:11.668 
01:37:55 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:11.668 01:37:55 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:11.668 01:37:55 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:11.669 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:11.669 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
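For reference, the sanitizer runtime environment exported in the trace above amounts to the shell setup below. The option strings are copied verbatim from the log, and the suppression file is the one the harness rewrites on every run; this is only an illustrative sketch, not a replacement for autotest_common.sh.

# Illustrative sketch: abort on the first ASan/UBSan error, keep coredumps enabled,
# and suppress the known libfuse3 leak via LSan, as the harness does above.
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file   # suppression list emitted by the harness
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file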
00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:11.669 01:37:55 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68514 ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68514 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4LVnwL 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.4LVnwL/tests/xnvme /tmp/spdk.4LVnwL 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:11.670 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977239552 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590810624 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260625408 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13977239552 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5590810624 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98881437696 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=821342208 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:11.670 * Looking for test storage... 
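The probe that follows ("Looking for test storage") walks the candidate directories and keeps the first mount point with enough free space for the 2 GiB request plus overhead. A minimal sketch of that selection, assuming the simplified df-based check below can stand in for set_test_storage (variable names taken from the trace):

# Simplified stand-in for set_test_storage; not the harness code itself.
requested_size=2214592512          # 2 GiB request plus overhead, as computed above
for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    avail=$(df --output=avail -B1 "$target_dir" 2>/dev/null | tail -n1)
    if [[ -n "$avail" && "$avail" -ge "$requested_size" ]]; then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done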
00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13977239552 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:11.670 01:37:55 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:11.671 01:37:55 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:11.671 01:37:55 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:11.671 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:11.671 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:11.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.933 --rc genhtml_branch_coverage=1 00:12:11.933 --rc genhtml_function_coverage=1 00:12:11.933 --rc genhtml_legend=1 00:12:11.933 --rc geninfo_all_blocks=1 00:12:11.933 --rc geninfo_unexecuted_blocks=1 00:12:11.933 00:12:11.933 ' 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:11.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.933 --rc genhtml_branch_coverage=1 00:12:11.933 --rc genhtml_function_coverage=1 00:12:11.933 --rc genhtml_legend=1 00:12:11.933 --rc geninfo_all_blocks=1 
00:12:11.933 --rc geninfo_unexecuted_blocks=1 00:12:11.933 00:12:11.933 ' 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:11.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.933 --rc genhtml_branch_coverage=1 00:12:11.933 --rc genhtml_function_coverage=1 00:12:11.933 --rc genhtml_legend=1 00:12:11.933 --rc geninfo_all_blocks=1 00:12:11.933 --rc geninfo_unexecuted_blocks=1 00:12:11.933 00:12:11.933 ' 00:12:11.933 01:37:55 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:11.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.933 --rc genhtml_branch_coverage=1 00:12:11.933 --rc genhtml_function_coverage=1 00:12:11.933 --rc genhtml_legend=1 00:12:11.933 --rc geninfo_all_blocks=1 00:12:11.933 --rc geninfo_unexecuted_blocks=1 00:12:11.933 00:12:11.933 ' 00:12:11.933 01:37:55 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.933 01:37:55 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.933 01:37:55 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.933 01:37:55 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.933 01:37:55 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.933 01:37:55 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:11.933 01:37:55 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.933 01:37:55 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:11.933 01:37:55 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:12.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:12.195 Waiting for block devices as requested 00:12:12.456 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.456 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.456 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:12.456 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.750 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:17.750 01:38:01 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:18.011 01:38:01 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:18.011 01:38:01 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:18.273 01:38:02 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:18.273 No valid GPT data, bailing 00:12:18.273 01:38:02 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:18.273 01:38:02 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:18.273 01:38:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:18.273 01:38:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.273 01:38:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.273 01:38:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.273 ************************************ 00:12:18.273 START TEST xnvme_rpc 00:12:18.273 ************************************ 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68900 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68900 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68900 ']' 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.273 01:38:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.535 [2024-11-21 01:38:02.277040] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
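The xnvme_rpc test starting here launches a plain spdk_tgt and then drives the xnvme bdev lifecycle over its RPC socket. Condensed from the calls that appear later in this trace, the round-trip is essentially:

# Condensed sketch of the xnvme_rpc flow (libaio pass, conserve_cpu flag left empty).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &        # waitforlisten then blocks on /var/tmp/spdk.sock
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ''        # create the xnvme bdev
rpc_cmd framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # verify the stored params
rpc_cmd bdev_xnvme_delete xnvme_bdev                     # tear the bdev down again
killprocess "$spdk_tgt"                                  # PID saved by the test (68900 in this run)

rpc_cmd here is the harness wrapper around the target's JSON-RPC interface; the standalone client under scripts/rpc.py exposes the same bdev_xnvme_create/bdev_xnvme_delete methods outside the harness.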
00:12:18.535 [2024-11-21 01:38:02.277193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68900 ] 00:12:18.535 [2024-11-21 01:38:02.443094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.796 [2024-11-21 01:38:02.566568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.369 xnvme_bdev 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.369 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:19.631 01:38:03 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68900 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68900 ']' 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68900 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68900 00:12:19.631 killing process with pid 68900 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.631 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.632 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68900' 00:12:19.632 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68900 00:12:19.632 01:38:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68900 00:12:21.551 00:12:21.551 real 0m2.879s 00:12:21.551 user 0m2.853s 00:12:21.551 sys 0m0.464s 00:12:21.551 01:38:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.551 01:38:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.551 ************************************ 00:12:21.551 END TEST xnvme_rpc 00:12:21.551 ************************************ 00:12:21.551 01:38:05 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:21.551 01:38:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.551 01:38:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.551 01:38:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:21.551 ************************************ 00:12:21.551 START TEST xnvme_bdevperf 00:12:21.551 ************************************ 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:21.551 01:38:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:21.551 { 00:12:21.551 "subsystems": [ 00:12:21.551 { 00:12:21.551 "subsystem": "bdev", 00:12:21.551 "config": [ 00:12:21.551 { 00:12:21.551 "params": { 00:12:21.551 "io_mechanism": "libaio", 00:12:21.551 "conserve_cpu": false, 00:12:21.551 "filename": "/dev/nvme0n1", 00:12:21.551 "name": "xnvme_bdev" 00:12:21.551 }, 00:12:21.551 "method": "bdev_xnvme_create" 00:12:21.551 }, 00:12:21.551 { 00:12:21.551 "method": "bdev_wait_for_examine" 00:12:21.551 } 00:12:21.551 ] 00:12:21.551 } 00:12:21.551 ] 00:12:21.551 } 00:12:21.551 [2024-11-21 01:38:05.199761] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:21.551 [2024-11-21 01:38:05.199858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68968 ] 00:12:21.551 [2024-11-21 01:38:05.346883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.551 [2024-11-21 01:38:05.422707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.812 Running I/O for 5 seconds... 00:12:23.765 32005.00 IOPS, 125.02 MiB/s [2024-11-21T01:38:08.667Z] 30057.50 IOPS, 117.41 MiB/s [2024-11-21T01:38:10.055Z] 30295.00 IOPS, 118.34 MiB/s [2024-11-21T01:38:10.999Z] 31517.75 IOPS, 123.12 MiB/s [2024-11-21T01:38:10.999Z] 31037.40 IOPS, 121.24 MiB/s 00:12:27.042 Latency(us) 00:12:27.042 [2024-11-21T01:38:10.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.042 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:27.042 xnvme_bdev : 5.01 31013.77 121.15 0.00 0.00 2059.16 381.24 6856.07 00:12:27.042 [2024-11-21T01:38:10.999Z] =================================================================================================================== 00:12:27.042 [2024-11-21T01:38:10.999Z] Total : 31013.77 121.15 0.00 0.00 2059.16 381.24 6856.07 00:12:27.615 01:38:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.615 01:38:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:27.615 01:38:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:27.615 01:38:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.615 01:38:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.615 { 00:12:27.615 "subsystems": [ 00:12:27.615 { 00:12:27.615 "subsystem": "bdev", 00:12:27.615 "config": [ 00:12:27.615 { 00:12:27.615 "params": { 00:12:27.615 "io_mechanism": "libaio", 00:12:27.615 "conserve_cpu": false, 00:12:27.615 "filename": "/dev/nvme0n1", 00:12:27.615 "name": "xnvme_bdev" 00:12:27.615 }, 00:12:27.615 "method": "bdev_xnvme_create" 00:12:27.615 }, 00:12:27.615 { 00:12:27.615 "method": "bdev_wait_for_examine" 00:12:27.615 } 00:12:27.615 ] 00:12:27.615 } 00:12:27.615 ] 00:12:27.615 } 00:12:27.615 [2024-11-21 01:38:11.503465] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
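Both bdevperf passes (the randread run above and the randwrite run starting here) follow the same pattern: gen_conf emits the small JSON bdev config shown in the log and bdevperf consumes it together with the queue-depth/workload flags. A hedged reconstruction with the JSON written to a temporary file (a hypothetical path used only for illustration) instead of arriving on /dev/fd/62:

# Sketch of the bdevperf invocation; in the harness the JSON comes from gen_conf on /dev/fd/62.
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
    -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096      # randread pass uses -w randread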
00:12:27.615 [2024-11-21 01:38:11.503608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69049 ] 00:12:27.877 [2024-11-21 01:38:11.668752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.877 [2024-11-21 01:38:11.784848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.139 Running I/O for 5 seconds... 00:12:30.473 36355.00 IOPS, 142.01 MiB/s [2024-11-21T01:38:15.374Z] 35921.50 IOPS, 140.32 MiB/s [2024-11-21T01:38:16.318Z] 35005.00 IOPS, 136.74 MiB/s [2024-11-21T01:38:17.261Z] 34968.75 IOPS, 136.60 MiB/s [2024-11-21T01:38:17.261Z] 34609.00 IOPS, 135.19 MiB/s 00:12:33.304 Latency(us) 00:12:33.304 [2024-11-21T01:38:17.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.304 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:33.304 xnvme_bdev : 5.01 34564.82 135.02 0.00 0.00 1845.58 392.27 9225.45 00:12:33.304 [2024-11-21T01:38:17.261Z] =================================================================================================================== 00:12:33.304 [2024-11-21T01:38:17.261Z] Total : 34564.82 135.02 0.00 0.00 1845.58 392.27 9225.45 00:12:34.250 00:12:34.250 real 0m12.746s 00:12:34.250 user 0m5.108s 00:12:34.250 sys 0m6.058s 00:12:34.250 01:38:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:34.250 ************************************ 00:12:34.250 END TEST xnvme_bdevperf 00:12:34.250 ************************************ 00:12:34.250 01:38:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:34.250 01:38:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:34.250 01:38:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:34.250 01:38:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:34.250 01:38:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.250 ************************************ 00:12:34.250 START TEST xnvme_fio_plugin 00:12:34.250 ************************************ 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:34.250 01:38:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:34.250 { 00:12:34.250 "subsystems": [ 00:12:34.250 { 00:12:34.250 "subsystem": "bdev", 00:12:34.250 "config": [ 00:12:34.250 { 00:12:34.250 "params": { 00:12:34.250 "io_mechanism": "libaio", 00:12:34.250 "conserve_cpu": false, 00:12:34.250 "filename": "/dev/nvme0n1", 00:12:34.250 "name": "xnvme_bdev" 00:12:34.250 }, 00:12:34.250 "method": "bdev_xnvme_create" 00:12:34.250 }, 00:12:34.250 { 00:12:34.250 "method": "bdev_wait_for_examine" 00:12:34.250 } 00:12:34.250 ] 00:12:34.250 } 00:12:34.250 ] 00:12:34.250 } 00:12:34.250 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:34.250 fio-3.35 00:12:34.250 Starting 1 thread 00:12:40.841 00:12:40.841 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69163: Thu Nov 21 01:38:23 2024 00:12:40.841 read: IOPS=36.2k, BW=142MiB/s (148MB/s)(708MiB/5002msec) 00:12:40.841 slat (usec): min=4, max=2103, avg=16.84, stdev=84.07 00:12:40.841 clat (usec): min=106, max=5318, avg=1294.77, stdev=476.00 00:12:40.841 lat (usec): min=205, max=5323, avg=1311.61, stdev=468.62 00:12:40.841 clat percentiles (usec): 00:12:40.841 | 1.00th=[ 310], 5.00th=[ 570], 10.00th=[ 742], 20.00th=[ 914], 00:12:40.841 | 30.00th=[ 1037], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1385], 00:12:40.841 | 70.00th=[ 1500], 80.00th=[ 1647], 90.00th=[ 1876], 95.00th=[ 2073], 00:12:40.841 | 99.00th=[ 2671], 99.50th=[ 3032], 99.90th=[ 3621], 99.95th=[ 3982], 00:12:40.841 | 99.99th=[ 4817] 00:12:40.841 bw ( KiB/s): 
min=137448, max=155696, per=100.00%, avg=145919.33, stdev=5452.52, samples=9 00:12:40.841 iops : min=34362, max=38924, avg=36479.78, stdev=1363.17, samples=9 00:12:40.841 lat (usec) : 250=0.46%, 500=3.02%, 750=6.85%, 1000=16.68% 00:12:40.841 lat (msec) : 2=66.36%, 4=6.58%, 10=0.05% 00:12:40.841 cpu : usr=52.17%, sys=39.93%, ctx=15, majf=0, minf=764 00:12:40.841 IO depths : 1=0.7%, 2=1.6%, 4=3.6%, 8=8.7%, 16=22.4%, 32=60.9%, >=64=2.1% 00:12:40.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:40.841 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:40.841 issued rwts: total=181215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:40.841 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:40.841 00:12:40.841 Run status group 0 (all jobs): 00:12:40.841 READ: bw=142MiB/s (148MB/s), 142MiB/s-142MiB/s (148MB/s-148MB/s), io=708MiB (742MB), run=5002-5002msec 00:12:41.103 ----------------------------------------------------- 00:12:41.103 Suppressions used: 00:12:41.103 count bytes template 00:12:41.103 1 11 /usr/src/fio/parse.c 00:12:41.103 1 8 libtcmalloc_minimal.so 00:12:41.103 1 904 libcrypto.so 00:12:41.103 ----------------------------------------------------- 00:12:41.103 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:41.103 
01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:41.103 01:38:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:41.103 { 00:12:41.103 "subsystems": [ 00:12:41.103 { 00:12:41.103 "subsystem": "bdev", 00:12:41.103 "config": [ 00:12:41.103 { 00:12:41.103 "params": { 00:12:41.103 "io_mechanism": "libaio", 00:12:41.103 "conserve_cpu": false, 00:12:41.103 "filename": "/dev/nvme0n1", 00:12:41.103 "name": "xnvme_bdev" 00:12:41.103 }, 00:12:41.103 "method": "bdev_xnvme_create" 00:12:41.103 }, 00:12:41.103 { 00:12:41.103 "method": "bdev_wait_for_examine" 00:12:41.103 } 00:12:41.103 ] 00:12:41.103 } 00:12:41.103 ] 00:12:41.103 } 00:12:41.103 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:41.103 fio-3.35 00:12:41.103 Starting 1 thread 00:12:47.699 00:12:47.699 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69254: Thu Nov 21 01:38:30 2024 00:12:47.699 write: IOPS=38.8k, BW=151MiB/s (159MB/s)(758MiB/5001msec); 0 zone resets 00:12:47.699 slat (usec): min=4, max=2064, avg=17.49, stdev=74.91 00:12:47.699 clat (usec): min=97, max=5003, avg=1166.71, stdev=490.24 00:12:47.699 lat (usec): min=193, max=5087, avg=1184.20, stdev=485.41 00:12:47.699 clat percentiles (usec): 00:12:47.699 | 1.00th=[ 285], 5.00th=[ 465], 10.00th=[ 611], 20.00th=[ 783], 00:12:47.699 | 30.00th=[ 898], 40.00th=[ 996], 50.00th=[ 1106], 60.00th=[ 1221], 00:12:47.699 | 70.00th=[ 1352], 80.00th=[ 1532], 90.00th=[ 1795], 95.00th=[ 2024], 00:12:47.699 | 99.00th=[ 2671], 99.50th=[ 3032], 99.90th=[ 3687], 99.95th=[ 3884], 00:12:47.699 | 99.99th=[ 4424] 00:12:47.699 bw ( KiB/s): min=141488, max=160832, per=98.74%, avg=153171.56, stdev=7586.27, samples=9 00:12:47.699 iops : min=35372, max=40208, avg=38292.89, stdev=1896.57, samples=9 00:12:47.699 lat (usec) : 100=0.01%, 250=0.66%, 500=5.37%, 750=11.60%, 1000=22.64% 00:12:47.699 lat (msec) : 2=54.33%, 4=5.37%, 10=0.03% 00:12:47.699 cpu : usr=45.30%, sys=44.42%, ctx=13, majf=0, minf=764 00:12:47.699 IO depths : 1=0.5%, 2=1.2%, 4=3.1%, 8=8.4%, 16=22.9%, 32=61.6%, >=64=2.2% 00:12:47.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.699 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:47.699 issued rwts: total=0,193942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.699 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.699 00:12:47.699 Run status group 0 (all jobs): 00:12:47.699 WRITE: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=758MiB (794MB), run=5001-5001msec 00:12:47.961 ----------------------------------------------------- 00:12:47.961 Suppressions used: 00:12:47.961 count bytes template 00:12:47.961 1 11 /usr/src/fio/parse.c 00:12:47.961 1 8 libtcmalloc_minimal.so 00:12:47.961 1 904 libcrypto.so 00:12:47.961 ----------------------------------------------------- 00:12:47.961 00:12:47.961 00:12:47.961 real 0m13.817s 00:12:47.961 user 0m7.641s 00:12:47.961 sys 0m4.869s 
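For reference, the fio plugin run logged above can be reproduced outside the autotest harness with a sketch along these lines. The paths, the /dev/nvme0n1 device, and the fio options are taken from the log; the conf helper is a hypothetical stand-in for the harness's gen_conf, and the libasan preload is only relevant for ASan-instrumented builds.

SPDK_DIR=/home/vagrant/spdk_repo/spdk
FIO_PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# Emit the same bdev subsystem config the harness passes on /dev/fd/62.
conf() {
  cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}

# Preload ASan (only needed for ASan builds) plus the SPDK fio plugin, then run fio.
LD_PRELOAD="/usr/lib64/libasan.so.8 $FIO_PLUGIN" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(conf) \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev

Process substitution (<(conf)) is what yields the /dev/fd/62-style path seen in the logged command line.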
00:12:47.961 01:38:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.961 ************************************ 00:12:47.961 END TEST xnvme_fio_plugin 00:12:47.961 ************************************ 00:12:47.961 01:38:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 01:38:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:47.961 01:38:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:47.961 01:38:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:47.961 01:38:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:47.961 01:38:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.961 01:38:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.961 01:38:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.961 ************************************ 00:12:47.961 START TEST xnvme_rpc 00:12:47.961 ************************************ 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69341 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69341 00:12:47.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69341 ']' 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.961 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.962 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.962 01:38:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.962 01:38:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:48.224 [2024-11-21 01:38:31.921216] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
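The waitforlisten step above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A minimal stand-in for that helper, assuming the rpc.py client from the SPDK repo shown in the log, could look like the sketch below; the real helper in autotest_common.sh also verifies that the target pid stays alive while waiting.

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Start the target in the background, as the test does.
"$SPDK_DIR/build/bin/spdk_tgt" &
tgt_pid=$!

# Poll the RPC socket until it responds (mirrors max_retries=100 in the log).
for ((i = 0; i < 100; i++)); do
  if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
    break
  fi
  sleep 0.5
done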
00:12:48.224 [2024-11-21 01:38:31.922114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69341 ] 00:12:48.224 [2024-11-21 01:38:32.084794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.485 [2024-11-21 01:38:32.206572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.060 xnvme_bdev 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:49.060 01:38:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.060 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:49.060 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69341 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69341 ']' 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69341 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69341 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.322 killing process with pid 69341 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69341' 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69341 00:12:49.322 01:38:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69341 00:12:50.706 00:12:50.706 real 0m2.789s 00:12:50.706 user 0m2.780s 00:12:50.706 sys 0m0.452s 00:12:50.706 01:38:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.706 ************************************ 00:12:50.706 END TEST xnvme_rpc 00:12:50.706 01:38:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.706 ************************************ 00:12:50.967 01:38:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:50.967 01:38:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.967 01:38:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.967 01:38:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.967 ************************************ 00:12:50.967 START TEST xnvme_bdevperf 00:12:50.967 ************************************ 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:50.967 01:38:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.967 { 00:12:50.967 "subsystems": [ 00:12:50.967 { 00:12:50.967 "subsystem": "bdev", 00:12:50.967 "config": [ 00:12:50.967 { 00:12:50.967 "params": { 00:12:50.967 "io_mechanism": "libaio", 00:12:50.967 "conserve_cpu": true, 00:12:50.967 "filename": "/dev/nvme0n1", 00:12:50.967 "name": "xnvme_bdev" 00:12:50.967 }, 00:12:50.967 "method": "bdev_xnvme_create" 00:12:50.967 }, 00:12:50.967 { 00:12:50.967 "method": "bdev_wait_for_examine" 00:12:50.967 } 00:12:50.967 ] 00:12:50.967 } 00:12:50.967 ] 00:12:50.967 } 00:12:50.967 [2024-11-21 01:38:34.741718] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:12:50.967 [2024-11-21 01:38:34.741834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69409 ] 00:12:50.967 [2024-11-21 01:38:34.899596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.227 [2024-11-21 01:38:34.994841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.487 Running I/O for 5 seconds... 00:12:53.368 36314.00 IOPS, 141.85 MiB/s [2024-11-21T01:38:38.267Z] 38209.00 IOPS, 149.25 MiB/s [2024-11-21T01:38:39.651Z] 37893.67 IOPS, 148.02 MiB/s [2024-11-21T01:38:40.592Z] 37120.25 IOPS, 145.00 MiB/s 00:12:56.635 Latency(us) 00:12:56.635 [2024-11-21T01:38:40.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.635 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:56.635 xnvme_bdev : 5.00 37085.68 144.87 0.00 0.00 1721.53 220.55 11393.18 00:12:56.635 [2024-11-21T01:38:40.592Z] =================================================================================================================== 00:12:56.635 [2024-11-21T01:38:40.592Z] Total : 37085.68 144.87 0.00 0.00 1721.53 220.55 11393.18 00:12:57.209 01:38:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.209 01:38:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:57.209 01:38:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:57.209 01:38:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.209 01:38:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.209 { 00:12:57.209 "subsystems": [ 00:12:57.209 { 00:12:57.209 "subsystem": "bdev", 00:12:57.209 "config": [ 00:12:57.209 { 00:12:57.209 "params": { 00:12:57.209 "io_mechanism": "libaio", 00:12:57.209 "conserve_cpu": true, 00:12:57.209 "filename": "/dev/nvme0n1", 00:12:57.209 "name": "xnvme_bdev" 00:12:57.209 }, 00:12:57.209 "method": "bdev_xnvme_create" 00:12:57.209 }, 00:12:57.209 { 00:12:57.209 "method": "bdev_wait_for_examine" 00:12:57.209 } 00:12:57.209 ] 00:12:57.209 } 00:12:57.209 ] 00:12:57.209 } 00:12:57.209 [2024-11-21 01:38:41.109271] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
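The bdevperf invocations above take their bdev configuration as JSON on a file descriptor. A standalone sketch of the same libaio, conserve_cpu=true randwrite run, with a hypothetical gen_conf stand-in fed through process substitution (which produces the /dev/fd/62-style path in the log):

SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Stand-in for gen_conf: the libaio, conserve_cpu=true config printed above.
gen_conf() {
  cat <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create", "params": {"io_mechanism": "libaio",
   "conserve_cpu": true, "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
}

# 64-deep random writes against xnvme_bdev for 5 seconds at 4 KiB I/O size.
"$SPDK_DIR/build/examples/bdevperf" --json <(gen_conf) \
  -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096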
00:12:57.209 [2024-11-21 01:38:41.109411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69491 ] 00:12:57.470 [2024-11-21 01:38:41.274236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.470 [2024-11-21 01:38:41.390871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.042 Running I/O for 5 seconds... 00:13:00.000 40396.00 IOPS, 157.80 MiB/s [2024-11-21T01:38:44.902Z] 38273.50 IOPS, 149.51 MiB/s [2024-11-21T01:38:45.844Z] 37366.00 IOPS, 145.96 MiB/s [2024-11-21T01:38:46.788Z] 37325.00 IOPS, 145.80 MiB/s 00:13:02.831 Latency(us) 00:13:02.831 [2024-11-21T01:38:46.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.831 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:02.831 xnvme_bdev : 5.00 37178.17 145.23 0.00 0.00 1717.22 159.90 7410.61 00:13:02.831 [2024-11-21T01:38:46.788Z] =================================================================================================================== 00:13:02.831 [2024-11-21T01:38:46.788Z] Total : 37178.17 145.23 0.00 0.00 1717.22 159.90 7410.61 00:13:03.781 00:13:03.781 real 0m12.799s 00:13:03.781 user 0m5.027s 00:13:03.781 sys 0m6.028s 00:13:03.781 01:38:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.781 ************************************ 00:13:03.781 END TEST xnvme_bdevperf 00:13:03.781 ************************************ 00:13:03.781 01:38:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.781 01:38:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:03.782 01:38:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:03.782 01:38:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.782 01:38:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:03.782 ************************************ 00:13:03.782 START TEST xnvme_fio_plugin 00:13:03.782 ************************************ 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:03.782 01:38:47 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:03.782 01:38:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:03.782 { 00:13:03.782 "subsystems": [ 00:13:03.782 { 00:13:03.782 "subsystem": "bdev", 00:13:03.782 "config": [ 00:13:03.782 { 00:13:03.782 "params": { 00:13:03.782 "io_mechanism": "libaio", 00:13:03.782 "conserve_cpu": true, 00:13:03.782 "filename": "/dev/nvme0n1", 00:13:03.782 "name": "xnvme_bdev" 00:13:03.782 }, 00:13:03.782 "method": "bdev_xnvme_create" 00:13:03.782 }, 00:13:03.782 { 00:13:03.782 "method": "bdev_wait_for_examine" 00:13:03.782 } 00:13:03.782 ] 00:13:03.782 } 00:13:03.782 ] 00:13:03.782 } 00:13:04.043 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:04.043 fio-3.35 00:13:04.043 Starting 1 thread 00:13:10.635 00:13:10.635 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69610: Thu Nov 21 01:38:53 2024 00:13:10.635 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec) 00:13:10.635 slat (usec): min=4, max=1918, avg=22.95, stdev=96.46 00:13:10.635 clat (usec): min=107, max=5589, avg=1310.90, stdev=540.03 00:13:10.635 lat (usec): min=193, max=5678, avg=1333.86, stdev=532.31 00:13:10.635 clat percentiles (usec): 00:13:10.635 | 1.00th=[ 277], 5.00th=[ 523], 10.00th=[ 676], 20.00th=[ 881], 00:13:10.635 | 30.00th=[ 1029], 40.00th=[ 1156], 50.00th=[ 1254], 60.00th=[ 1385], 00:13:10.635 | 70.00th=[ 1516], 80.00th=[ 1696], 90.00th=[ 1958], 95.00th=[ 2245], 00:13:10.635 | 99.00th=[ 3032], 99.50th=[ 3326], 99.90th=[ 4047], 99.95th=[ 4359], 00:13:10.635 | 99.99th=[ 4948] 00:13:10.635 bw ( KiB/s): min=118992, max=147312, per=100.00%, avg=133071.11, stdev=9372.50, 
samples=9 00:13:10.635 iops : min=29748, max=36828, avg=33267.78, stdev=2343.12, samples=9 00:13:10.635 lat (usec) : 250=0.70%, 500=3.77%, 750=8.46%, 1000=15.00% 00:13:10.635 lat (msec) : 2=63.05%, 4=8.90%, 10=0.12% 00:13:10.635 cpu : usr=37.44%, sys=53.66%, ctx=11, majf=0, minf=764 00:13:10.635 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=8.2%, 16=23.5%, 32=62.0%, >=64=2.1% 00:13:10.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:10.635 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:10.635 issued rwts: total=166011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:10.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:10.635 00:13:10.635 Run status group 0 (all jobs): 00:13:10.635 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (680MB), run=5001-5001msec 00:13:10.635 ----------------------------------------------------- 00:13:10.635 Suppressions used: 00:13:10.635 count bytes template 00:13:10.635 1 11 /usr/src/fio/parse.c 00:13:10.635 1 8 libtcmalloc_minimal.so 00:13:10.635 1 904 libcrypto.so 00:13:10.635 ----------------------------------------------------- 00:13:10.635 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.635 01:38:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.635 { 00:13:10.635 "subsystems": [ 00:13:10.635 { 00:13:10.635 "subsystem": "bdev", 00:13:10.635 "config": [ 00:13:10.635 { 00:13:10.635 "params": { 00:13:10.635 "io_mechanism": "libaio", 00:13:10.635 "conserve_cpu": true, 00:13:10.635 "filename": "/dev/nvme0n1", 00:13:10.635 "name": "xnvme_bdev" 00:13:10.635 }, 00:13:10.635 "method": "bdev_xnvme_create" 00:13:10.635 }, 00:13:10.635 { 00:13:10.635 "method": "bdev_wait_for_examine" 00:13:10.635 } 00:13:10.635 ] 00:13:10.635 } 00:13:10.635 ] 00:13:10.635 } 00:13:10.898 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:10.898 fio-3.35 00:13:10.898 Starting 1 thread 00:13:17.489 00:13:17.489 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69697: Thu Nov 21 01:39:00 2024 00:13:17.489 write: IOPS=37.0k, BW=144MiB/s (151MB/s)(723MiB/5003msec); 0 zone resets 00:13:17.489 slat (usec): min=4, max=2031, avg=18.63, stdev=83.60 00:13:17.489 clat (usec): min=106, max=6265, avg=1219.86, stdev=483.36 00:13:17.489 lat (usec): min=193, max=6269, avg=1238.49, stdev=476.77 00:13:17.489 clat percentiles (usec): 00:13:17.489 | 1.00th=[ 289], 5.00th=[ 519], 10.00th=[ 676], 20.00th=[ 840], 00:13:17.489 | 30.00th=[ 963], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1287], 00:13:17.489 | 70.00th=[ 1401], 80.00th=[ 1565], 90.00th=[ 1811], 95.00th=[ 2073], 00:13:17.489 | 99.00th=[ 2737], 99.50th=[ 3064], 99.90th=[ 3785], 99.95th=[ 4047], 00:13:17.489 | 99.99th=[ 4359] 00:13:17.489 bw ( KiB/s): min=137859, max=165210, per=100.00%, avg=148070.40, stdev=8957.84, samples=10 00:13:17.489 iops : min=34464, max=41302, avg=37017.40, stdev=2239.42, samples=10 00:13:17.489 lat (usec) : 250=0.58%, 500=3.96%, 750=9.00%, 1000=20.24% 00:13:17.489 lat (msec) : 2=60.26%, 4=5.91%, 10=0.06% 00:13:17.489 cpu : usr=44.60%, sys=46.90%, ctx=58, majf=0, minf=764 00:13:17.489 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.3%, 16=22.5%, 32=62.0%, >=64=2.1% 00:13:17.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.489 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:17.489 issued rwts: total=0,185007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.489 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.489 00:13:17.489 Run status group 0 (all jobs): 00:13:17.489 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=723MiB (758MB), run=5003-5003msec 00:13:17.751 ----------------------------------------------------- 00:13:17.751 Suppressions used: 00:13:17.751 count bytes template 00:13:17.751 1 11 /usr/src/fio/parse.c 00:13:17.751 1 8 libtcmalloc_minimal.so 00:13:17.751 1 904 libcrypto.so 00:13:17.751 ----------------------------------------------------- 00:13:17.751 00:13:17.751 00:13:17.751 real 0m14.005s 00:13:17.751 user 0m7.036s 00:13:17.751 sys 0m5.703s 00:13:17.751 01:39:01 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.751 ************************************ 00:13:17.751 01:39:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.751 END TEST xnvme_fio_plugin 00:13:17.751 ************************************ 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:17.751 01:39:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:17.751 01:39:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:17.751 01:39:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:17.751 01:39:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.751 ************************************ 00:13:17.751 START TEST xnvme_rpc 00:13:17.751 ************************************ 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69789 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69789 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69789 ']' 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.751 01:39:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.012 [2024-11-21 01:39:01.719427] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
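The xnvme_rpc test that starts here creates the bdev over RPC and then reads the configuration back to verify each parameter before deleting it again. A sketch of that pattern, with rpc.py standing in for the harness's rpc_cmd helper (socket path, device, and jq filter taken from the log):

RPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)

# Create the bdev: filename, name, io_mechanism (no -c, so conserve_cpu stays false).
"${RPC[@]}" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring

# Read one bdev_xnvme_create parameter back from the live config.
get_param() {
  "${RPC[@]}" framework_get_config bdev |
    jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}

[[ "$(get_param name)" == xnvme_bdev ]]
[[ "$(get_param filename)" == /dev/nvme0n1 ]]
[[ "$(get_param io_mechanism)" == io_uring ]]
[[ "$(get_param conserve_cpu)" == false ]]

# Tear the bdev down again before stopping the target.
"${RPC[@]}" bdev_xnvme_delete xnvme_bdev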
00:13:18.012 [2024-11-21 01:39:01.719592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69789 ] 00:13:18.012 [2024-11-21 01:39:01.886500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.273 [2024-11-21 01:39:02.032533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 xnvme_bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69789 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69789 ']' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69789 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.214 01:39:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69789 00:13:19.214 01:39:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.214 01:39:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.214 killing process with pid 69789 00:13:19.214 01:39:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69789' 00:13:19.214 01:39:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69789 00:13:19.214 01:39:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69789 00:13:21.129 00:13:21.129 real 0m3.065s 00:13:21.129 user 0m2.953s 00:13:21.129 sys 0m0.589s 00:13:21.129 01:39:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.129 ************************************ 00:13:21.129 END TEST xnvme_rpc 00:13:21.129 ************************************ 00:13:21.129 01:39:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 01:39:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:21.129 01:39:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:21.129 01:39:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.129 01:39:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 ************************************ 00:13:21.129 START TEST xnvme_bdevperf 00:13:21.129 ************************************ 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:21.129 01:39:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 { 00:13:21.129 "subsystems": [ 00:13:21.129 { 00:13:21.129 "subsystem": "bdev", 00:13:21.129 "config": [ 00:13:21.129 { 00:13:21.129 "params": { 00:13:21.129 "io_mechanism": "io_uring", 00:13:21.129 "conserve_cpu": false, 00:13:21.129 "filename": "/dev/nvme0n1", 00:13:21.129 "name": "xnvme_bdev" 00:13:21.129 }, 00:13:21.129 "method": "bdev_xnvme_create" 00:13:21.129 }, 00:13:21.129 { 00:13:21.129 "method": "bdev_wait_for_examine" 00:13:21.129 } 00:13:21.129 ] 00:13:21.129 } 00:13:21.129 ] 00:13:21.129 } 00:13:21.129 [2024-11-21 01:39:04.831273] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:21.129 [2024-11-21 01:39:04.831438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69862 ] 00:13:21.129 [2024-11-21 01:39:04.998918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.391 [2024-11-21 01:39:05.116558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.652 Running I/O for 5 seconds... 00:13:23.539 32028.00 IOPS, 125.11 MiB/s [2024-11-21T01:39:08.437Z] 32242.50 IOPS, 125.95 MiB/s [2024-11-21T01:39:09.839Z] 32511.67 IOPS, 127.00 MiB/s [2024-11-21T01:39:10.412Z] 32562.50 IOPS, 127.20 MiB/s 00:13:26.455 Latency(us) 00:13:26.455 [2024-11-21T01:39:10.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.455 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:26.455 xnvme_bdev : 5.00 32560.32 127.19 0.00 0.00 1961.57 488.37 7763.50 00:13:26.455 [2024-11-21T01:39:10.412Z] =================================================================================================================== 00:13:26.455 [2024-11-21T01:39:10.412Z] Total : 32560.32 127.19 0.00 0.00 1961.57 488.37 7763.50 00:13:27.401 01:39:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.401 01:39:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:27.401 01:39:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:27.401 01:39:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:27.401 01:39:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.401 { 00:13:27.401 "subsystems": [ 00:13:27.401 { 00:13:27.401 "subsystem": "bdev", 00:13:27.401 "config": [ 00:13:27.401 { 00:13:27.401 "params": { 00:13:27.401 "io_mechanism": "io_uring", 00:13:27.401 "conserve_cpu": false, 00:13:27.401 "filename": "/dev/nvme0n1", 00:13:27.401 "name": "xnvme_bdev" 00:13:27.401 }, 00:13:27.401 "method": "bdev_xnvme_create" 00:13:27.401 }, 00:13:27.401 { 00:13:27.401 "method": "bdev_wait_for_examine" 00:13:27.401 } 00:13:27.401 ] 00:13:27.401 } 00:13:27.401 ] 00:13:27.401 } 00:13:27.401 [2024-11-21 01:39:11.252561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
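The io_pattern loops above select their workload list through a bash nameref keyed on the io mechanism. A small sketch of that dispatch; the array contents here are illustrative, and the real lists live in the xnvme/xnvme.sh script referenced in the log:

# Workload lists per io mechanism (contents illustrative).
libaio=(randread randwrite)
io_uring=(randread randwrite)

run_patterns() {
  local io=$1
  local -n io_pattern_ref=$io        # nameref: resolves to the array named by "$io"
  local io_pattern
  for io_pattern in "${io_pattern_ref[@]}"; do
    echo "would run bdevperf -w $io_pattern with io_mechanism=$io"
  done
}

run_patterns io_uring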
00:13:27.401 [2024-11-21 01:39:11.252736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69933 ] 00:13:27.662 [2024-11-21 01:39:11.418192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.662 [2024-11-21 01:39:11.566155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.923 Running I/O for 5 seconds... 00:13:30.256 35395.00 IOPS, 138.26 MiB/s [2024-11-21T01:39:15.155Z] 34827.50 IOPS, 136.04 MiB/s [2024-11-21T01:39:16.097Z] 35307.00 IOPS, 137.92 MiB/s [2024-11-21T01:39:17.039Z] 35405.50 IOPS, 138.30 MiB/s 00:13:33.082 Latency(us) 00:13:33.082 [2024-11-21T01:39:17.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.082 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:33.082 xnvme_bdev : 5.00 35612.64 139.11 0.00 0.00 1792.97 365.49 7763.50 00:13:33.082 [2024-11-21T01:39:17.039Z] =================================================================================================================== 00:13:33.082 [2024-11-21T01:39:17.039Z] Total : 35612.64 139.11 0.00 0.00 1792.97 365.49 7763.50 00:13:33.714 00:13:33.714 real 0m12.879s 00:13:33.714 user 0m5.956s 00:13:33.714 sys 0m6.638s 00:13:33.714 01:39:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.714 ************************************ 00:13:33.714 END TEST xnvme_bdevperf 00:13:33.714 ************************************ 00:13:33.714 01:39:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:33.981 01:39:17 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:33.981 01:39:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:33.981 01:39:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.981 01:39:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.981 ************************************ 00:13:33.981 START TEST xnvme_fio_plugin 00:13:33.981 ************************************ 00:13:33.981 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:33.981 01:39:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:33.981 01:39:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.982 01:39:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.982 { 00:13:33.982 "subsystems": [ 00:13:33.982 { 00:13:33.982 "subsystem": "bdev", 00:13:33.982 "config": [ 00:13:33.982 { 00:13:33.982 "params": { 00:13:33.982 "io_mechanism": "io_uring", 00:13:33.982 "conserve_cpu": false, 00:13:33.982 "filename": "/dev/nvme0n1", 00:13:33.982 "name": "xnvme_bdev" 00:13:33.982 }, 00:13:33.982 "method": "bdev_xnvme_create" 00:13:33.982 }, 00:13:33.982 { 00:13:33.982 "method": "bdev_wait_for_examine" 00:13:33.982 } 00:13:33.982 ] 00:13:33.982 } 00:13:33.982 ] 00:13:33.982 } 00:13:33.982 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:33.982 fio-3.35 00:13:33.982 Starting 1 thread 00:13:40.574 00:13:40.574 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70052: Thu Nov 21 01:39:23 2024 00:13:40.574 read: IOPS=32.1k, BW=126MiB/s (132MB/s)(628MiB/5002msec) 00:13:40.574 slat (nsec): min=2711, max=73014, avg=3459.37, stdev=2014.39 00:13:40.574 clat (usec): min=1052, max=3349, avg=1847.98, stdev=304.16 00:13:40.574 lat (usec): min=1054, max=3377, avg=1851.44, stdev=304.53 00:13:40.574 clat percentiles (usec): 00:13:40.574 | 1.00th=[ 1254], 5.00th=[ 1401], 10.00th=[ 1483], 20.00th=[ 1582], 00:13:40.574 | 30.00th=[ 1663], 40.00th=[ 1745], 50.00th=[ 1827], 60.00th=[ 1909], 00:13:40.574 | 70.00th=[ 1991], 80.00th=[ 2089], 90.00th=[ 2245], 95.00th=[ 2376], 00:13:40.574 | 99.00th=[ 2671], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3163], 00:13:40.574 | 99.99th=[ 3294] 00:13:40.574 bw ( KiB/s): min=124416, max=138986, per=100.00%, avg=128765.56, 
stdev=4277.13, samples=9 00:13:40.574 iops : min=31104, max=34746, avg=32191.33, stdev=1069.13, samples=9 00:13:40.574 lat (msec) : 2=71.15%, 4=28.85% 00:13:40.574 cpu : usr=31.95%, sys=66.79%, ctx=13, majf=0, minf=762 00:13:40.574 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:40.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.574 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:40.574 issued rwts: total=160768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.574 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.574 00:13:40.574 Run status group 0 (all jobs): 00:13:40.574 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=628MiB (659MB), run=5002-5002msec 00:13:40.835 ----------------------------------------------------- 00:13:40.835 Suppressions used: 00:13:40.835 count bytes template 00:13:40.835 1 11 /usr/src/fio/parse.c 00:13:40.835 1 8 libtcmalloc_minimal.so 00:13:40.835 1 904 libcrypto.so 00:13:40.835 ----------------------------------------------------- 00:13:40.835 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.835 01:39:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.835 01:39:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.835 { 00:13:40.835 "subsystems": [ 00:13:40.835 { 00:13:40.835 "subsystem": "bdev", 00:13:40.835 "config": [ 00:13:40.835 { 00:13:40.835 "params": { 00:13:40.835 "io_mechanism": "io_uring", 00:13:40.835 "conserve_cpu": false, 00:13:40.835 "filename": "/dev/nvme0n1", 00:13:40.835 "name": "xnvme_bdev" 00:13:40.835 }, 00:13:40.835 "method": "bdev_xnvme_create" 00:13:40.835 }, 00:13:40.835 { 00:13:40.835 "method": "bdev_wait_for_examine" 00:13:40.835 } 00:13:40.835 ] 00:13:40.835 } 00:13:40.835 ] 00:13:40.835 } 00:13:41.096 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:41.096 fio-3.35 00:13:41.096 Starting 1 thread 00:13:47.687 00:13:47.687 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70144: Thu Nov 21 01:39:30 2024 00:13:47.687 write: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5001msec); 0 zone resets 00:13:47.687 slat (usec): min=2, max=265, avg= 3.64, stdev= 2.55 00:13:47.687 clat (usec): min=365, max=4411, avg=1755.16, stdev=275.65 00:13:47.687 lat (usec): min=368, max=4416, avg=1758.80, stdev=276.03 00:13:47.687 clat percentiles (usec): 00:13:47.687 | 1.00th=[ 1287], 5.00th=[ 1401], 10.00th=[ 1467], 20.00th=[ 1532], 00:13:47.687 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1778], 00:13:47.687 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:13:47.687 | 99.00th=[ 2606], 99.50th=[ 2737], 99.90th=[ 3195], 99.95th=[ 3490], 00:13:47.687 | 99.99th=[ 4228] 00:13:47.687 bw ( KiB/s): min=129016, max=142280, per=100.00%, avg=135118.22, stdev=4641.94, samples=9 00:13:47.687 iops : min=32254, max=35570, avg=33779.56, stdev=1160.48, samples=9 00:13:47.687 lat (usec) : 500=0.01% 00:13:47.687 lat (msec) : 2=83.86%, 4=16.11%, 10=0.02% 00:13:47.687 cpu : usr=32.80%, sys=65.36%, ctx=30, majf=0, minf=762 00:13:47.687 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:47.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.687 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:47.687 issued rwts: total=0,168096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.687 00:13:47.687 Run status group 0 (all jobs): 00:13:47.687 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5001-5001msec 00:13:47.687 ----------------------------------------------------- 00:13:47.687 Suppressions used: 00:13:47.687 count bytes template 00:13:47.687 1 11 /usr/src/fio/parse.c 00:13:47.687 1 8 libtcmalloc_minimal.so 00:13:47.687 1 904 libcrypto.so 00:13:47.687 ----------------------------------------------------- 00:13:47.687 00:13:47.687 00:13:47.687 real 0m13.772s 00:13:47.687 user 0m6.127s 00:13:47.687 sys 0m7.160s 00:13:47.687 01:39:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.687 ************************************ 00:13:47.687 END TEST 
xnvme_fio_plugin 00:13:47.687 ************************************ 00:13:47.687 01:39:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.687 01:39:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:47.687 01:39:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:47.687 01:39:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:47.687 01:39:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:47.687 01:39:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.687 01:39:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.687 01:39:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.687 ************************************ 00:13:47.687 START TEST xnvme_rpc 00:13:47.687 ************************************ 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:47.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70230 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70230 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70230 ']' 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.687 01:39:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.687 [2024-11-21 01:39:31.636976] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
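The RPC round-trip this xnvme_rpc test performs can be reproduced by hand once the spdk_tgt above is listening; a minimal sketch, assuming scripts/rpc.py from the checked-out repo and the default /var/tmp/spdk.sock socket (both assumptions, taken from this environment):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Create the xnvme bdev on the io_uring backend; -c turns on conserve_cpu,
    # matching the cc["true"]=-c mapping in the trace.
    $RPC bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

    # Dump the bdev subsystem config and pick out the params the test asserts on
    # (name, filename, io_mechanism, conserve_cpu).
    $RPC framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params'

    # Tear the bdev down again before the target is killed.
    $RPC bdev_xnvme_delete xnvme_bdev
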
00:13:47.687 [2024-11-21 01:39:31.637130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70230 ] 00:13:47.949 [2024-11-21 01:39:31.798144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.211 [2024-11-21 01:39:31.916187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 xnvme_bdev 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70230 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70230 ']' 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70230 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70230 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70230' 00:13:49.047 killing process with pid 70230 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70230 00:13:49.047 01:39:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70230 00:13:50.961 00:13:50.961 real 0m2.875s 00:13:50.961 user 0m2.889s 00:13:50.961 sys 0m0.480s 00:13:50.961 ************************************ 00:13:50.961 END TEST xnvme_rpc 00:13:50.961 ************************************ 00:13:50.961 01:39:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.961 01:39:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 01:39:34 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:50.961 01:39:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:50.961 01:39:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.961 01:39:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 ************************************ 00:13:50.961 START TEST xnvme_bdevperf 00:13:50.961 ************************************ 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:50.961 01:39:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:50.961 { 00:13:50.961 "subsystems": [ 00:13:50.961 { 00:13:50.961 "subsystem": "bdev", 00:13:50.961 "config": [ 00:13:50.961 { 00:13:50.961 "params": { 00:13:50.961 "io_mechanism": "io_uring", 00:13:50.961 "conserve_cpu": true, 00:13:50.961 "filename": "/dev/nvme0n1", 00:13:50.961 "name": "xnvme_bdev" 00:13:50.961 }, 00:13:50.962 "method": "bdev_xnvme_create" 00:13:50.962 }, 00:13:50.962 { 00:13:50.962 "method": "bdev_wait_for_examine" 00:13:50.962 } 00:13:50.962 ] 00:13:50.962 } 00:13:50.962 ] 00:13:50.962 } 00:13:50.962 [2024-11-21 01:39:34.575433] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:13:50.962 [2024-11-21 01:39:34.575581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70299 ] 00:13:50.962 [2024-11-21 01:39:34.739916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.962 [2024-11-21 01:39:34.856727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.220 Running I/O for 5 seconds... 00:13:53.179 37094.00 IOPS, 144.90 MiB/s [2024-11-21T01:39:38.524Z] 36701.00 IOPS, 143.36 MiB/s [2024-11-21T01:39:39.468Z] 35427.67 IOPS, 138.39 MiB/s [2024-11-21T01:39:40.412Z] 34738.25 IOPS, 135.70 MiB/s 00:13:56.455 Latency(us) 00:13:56.455 [2024-11-21T01:39:40.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.455 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:56.455 xnvme_bdev : 5.00 34295.85 133.97 0.00 0.00 1862.29 831.80 8822.15 00:13:56.455 [2024-11-21T01:39:40.412Z] =================================================================================================================== 00:13:56.455 [2024-11-21T01:39:40.412Z] Total : 34295.85 133.97 0.00 0.00 1862.29 831.80 8822.15 00:13:57.028 01:39:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.028 01:39:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:57.028 01:39:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:57.029 01:39:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:57.029 01:39:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.029 { 00:13:57.029 "subsystems": [ 00:13:57.029 { 00:13:57.029 "subsystem": "bdev", 00:13:57.029 "config": [ 00:13:57.029 { 00:13:57.029 "params": { 00:13:57.029 "io_mechanism": "io_uring", 00:13:57.029 "conserve_cpu": true, 00:13:57.029 "filename": "/dev/nvme0n1", 00:13:57.029 "name": "xnvme_bdev" 00:13:57.029 }, 00:13:57.029 "method": "bdev_xnvme_create" 00:13:57.029 }, 00:13:57.029 { 00:13:57.029 "method": "bdev_wait_for_examine" 00:13:57.029 } 00:13:57.029 ] 00:13:57.029 } 00:13:57.029 ] 00:13:57.029 } 00:13:57.029 [2024-11-21 01:39:40.980217] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
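A hedged sketch of the bdevperf invocation pattern seen in these passes: the harness generates the bdev layout as JSON and hands it to bdevperf on /dev/fd/62; the same run can be reproduced with the config in a plain file (the /tmp path below is hypothetical, the JSON and flags are the ones shown above):

    cat > /tmp/xnvme_bdev.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "method": "bdev_xnvme_create",
          "params": { "io_mechanism": "io_uring", "conserve_cpu": true,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF

    # 4 KiB IOs, queue depth 64, 5-second randwrite pass against the xnvme_bdev target.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
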
00:13:57.029 [2024-11-21 01:39:40.980554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70375 ] 00:13:57.290 [2024-11-21 01:39:41.142205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.551 [2024-11-21 01:39:41.265484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.812 Running I/O for 5 seconds... 00:13:59.700 33615.00 IOPS, 131.31 MiB/s [2024-11-21T01:39:44.600Z] 33482.50 IOPS, 130.79 MiB/s [2024-11-21T01:39:45.986Z] 33637.33 IOPS, 131.40 MiB/s [2024-11-21T01:39:46.559Z] 33986.00 IOPS, 132.76 MiB/s [2024-11-21T01:39:46.559Z] 33836.00 IOPS, 132.17 MiB/s 00:14:02.602 Latency(us) 00:14:02.602 [2024-11-21T01:39:46.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:02.602 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:02.602 xnvme_bdev : 5.00 33817.53 132.10 0.00 0.00 1888.40 601.80 9124.63 00:14:02.602 [2024-11-21T01:39:46.559Z] =================================================================================================================== 00:14:02.602 [2024-11-21T01:39:46.559Z] Total : 33817.53 132.10 0.00 0.00 1888.40 601.80 9124.63 00:14:03.546 00:14:03.546 real 0m12.839s 00:14:03.546 user 0m8.407s 00:14:03.546 sys 0m3.869s 00:14:03.546 ************************************ 00:14:03.546 END TEST xnvme_bdevperf 00:14:03.546 ************************************ 00:14:03.546 01:39:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.546 01:39:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:03.546 01:39:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:03.546 01:39:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:03.546 01:39:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.546 01:39:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.546 ************************************ 00:14:03.546 START TEST xnvme_fio_plugin 00:14:03.546 ************************************ 00:14:03.546 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:03.546 01:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:03.546 01:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:03.547 01:39:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:03.547 { 00:14:03.547 "subsystems": [ 00:14:03.547 { 00:14:03.547 "subsystem": "bdev", 00:14:03.547 "config": [ 00:14:03.547 { 00:14:03.547 "params": { 00:14:03.547 "io_mechanism": "io_uring", 00:14:03.547 "conserve_cpu": true, 00:14:03.547 "filename": "/dev/nvme0n1", 00:14:03.547 "name": "xnvme_bdev" 00:14:03.547 }, 00:14:03.547 "method": "bdev_xnvme_create" 00:14:03.547 }, 00:14:03.547 { 00:14:03.547 "method": "bdev_wait_for_examine" 00:14:03.547 } 00:14:03.547 ] 00:14:03.547 } 00:14:03.547 ] 00:14:03.547 } 00:14:03.807 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:03.807 fio-3.35 00:14:03.807 Starting 1 thread 00:14:10.396 00:14:10.396 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70494: Thu Nov 21 01:39:53 2024 00:14:10.396 read: IOPS=32.1k, BW=125MiB/s (132MB/s)(628MiB/5002msec) 00:14:10.396 slat (nsec): min=2738, max=88953, avg=3397.21, stdev=1859.86 00:14:10.396 clat (usec): min=1155, max=3813, avg=1851.51, stdev=288.30 00:14:10.396 lat (usec): min=1157, max=3843, avg=1854.90, stdev=288.59 00:14:10.396 clat percentiles (usec): 00:14:10.396 | 1.00th=[ 1336], 5.00th=[ 1450], 10.00th=[ 1516], 20.00th=[ 1598], 00:14:10.396 | 30.00th=[ 1680], 40.00th=[ 1745], 50.00th=[ 1827], 60.00th=[ 1893], 00:14:10.396 | 70.00th=[ 1975], 80.00th=[ 2073], 90.00th=[ 2245], 95.00th=[ 2376], 00:14:10.396 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3326], 99.95th=[ 3458], 00:14:10.396 | 99.99th=[ 3687] 00:14:10.396 bw 
( KiB/s): min=125440, max=131584, per=99.91%, avg=128398.22, stdev=1614.58, samples=9 00:14:10.396 iops : min=31360, max=32896, avg=32099.56, stdev=403.65, samples=9 00:14:10.396 lat (msec) : 2=72.29%, 4=27.71% 00:14:10.396 cpu : usr=58.99%, sys=37.19%, ctx=8, majf=0, minf=762 00:14:10.396 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:10.396 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:10.397 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:10.397 issued rwts: total=160704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:10.397 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:10.397 00:14:10.397 Run status group 0 (all jobs): 00:14:10.397 READ: bw=125MiB/s (132MB/s), 125MiB/s-125MiB/s (132MB/s-132MB/s), io=628MiB (658MB), run=5002-5002msec 00:14:10.397 ----------------------------------------------------- 00:14:10.397 Suppressions used: 00:14:10.397 count bytes template 00:14:10.397 1 11 /usr/src/fio/parse.c 00:14:10.397 1 8 libtcmalloc_minimal.so 00:14:10.397 1 904 libcrypto.so 00:14:10.397 ----------------------------------------------------- 00:14:10.397 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 
-- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:10.397 01:39:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.397 { 00:14:10.397 "subsystems": [ 00:14:10.397 { 00:14:10.397 "subsystem": "bdev", 00:14:10.397 "config": [ 00:14:10.397 { 00:14:10.397 "params": { 00:14:10.397 "io_mechanism": "io_uring", 00:14:10.397 "conserve_cpu": true, 00:14:10.397 "filename": "/dev/nvme0n1", 00:14:10.397 "name": "xnvme_bdev" 00:14:10.397 }, 00:14:10.397 "method": "bdev_xnvme_create" 00:14:10.397 }, 00:14:10.397 { 00:14:10.397 "method": "bdev_wait_for_examine" 00:14:10.397 } 00:14:10.397 ] 00:14:10.397 } 00:14:10.397 ] 00:14:10.397 } 00:14:10.656 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:10.656 fio-3.35 00:14:10.656 Starting 1 thread 00:14:17.292 00:14:17.292 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70580: Thu Nov 21 01:40:00 2024 00:14:17.292 write: IOPS=38.6k, BW=151MiB/s (158MB/s)(754MiB/5002msec); 0 zone resets 00:14:17.292 slat (nsec): min=2786, max=73157, avg=3348.58, stdev=1533.08 00:14:17.292 clat (usec): min=832, max=6717, avg=1524.96, stdev=273.26 00:14:17.292 lat (usec): min=835, max=6720, avg=1528.31, stdev=273.60 00:14:17.292 clat percentiles (usec): 00:14:17.292 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1287], 00:14:17.292 | 30.00th=[ 1369], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1549], 00:14:17.292 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1876], 95.00th=[ 2024], 00:14:17.292 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2802], 99.95th=[ 2966], 00:14:17.292 | 99.99th=[ 4621] 00:14:17.292 bw ( KiB/s): min=139208, max=160632, per=100.00%, avg=155305.78, stdev=7101.09, samples=9 00:14:17.292 iops : min=34802, max=40158, avg=38826.44, stdev=1775.27, samples=9 00:14:17.292 lat (usec) : 1000=0.18% 00:14:17.292 lat (msec) : 2=94.31%, 4=5.49%, 10=0.01% 00:14:17.292 cpu : usr=66.39%, sys=30.59%, ctx=11, majf=0, minf=762 00:14:17.292 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:17.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.292 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:17.292 issued rwts: total=0,193141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.292 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:17.292 00:14:17.292 Run status group 0 (all jobs): 00:14:17.292 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=754MiB (791MB), run=5002-5002msec 00:14:17.292 ----------------------------------------------------- 00:14:17.292 Suppressions used: 00:14:17.292 count bytes template 00:14:17.292 1 11 /usr/src/fio/parse.c 00:14:17.292 1 8 libtcmalloc_minimal.so 00:14:17.292 1 904 libcrypto.so 00:14:17.292 ----------------------------------------------------- 00:14:17.292 00:14:17.292 ************************************ 00:14:17.292 END TEST xnvme_fio_plugin 00:14:17.292 ************************************ 00:14:17.292 00:14:17.292 real 0m13.624s 00:14:17.292 user 
0m9.046s 00:14:17.292 sys 0m3.909s 00:14:17.292 01:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.292 01:40:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:17.292 01:40:01 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:17.292 01:40:01 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:17.292 01:40:01 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.292 01:40:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.292 ************************************ 00:14:17.292 START TEST xnvme_rpc 00:14:17.292 ************************************ 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:17.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70666 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70666 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70666 ']' 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.292 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.293 01:40:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:17.293 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.293 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.293 01:40:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.293 [2024-11-21 01:40:01.187310] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
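Same RPC round-trip as the io_uring case earlier, but this xnvme_rpc pass targets the NVMe character device with io_uring_cmd and leaves conserve_cpu off; a sketch, again assuming scripts/rpc.py and the default RPC socket:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # io_uring_cmd drives the char device (/dev/ng0n1) via NVMe passthrough; no -c,
    # so the reported conserve_cpu is expected to come back false.
    $RPC bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
    $RPC framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    $RPC bdev_xnvme_delete xnvme_bdev
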
00:14:17.293 [2024-11-21 01:40:01.187671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70666 ] 00:14:17.549 [2024-11-21 01:40:01.351141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.549 [2024-11-21 01:40:01.452626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.115 xnvme_bdev 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.115 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:18.373 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70666 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70666 ']' 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70666 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70666 00:14:18.374 killing process with pid 70666 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70666' 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70666 00:14:18.374 01:40:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70666 00:14:19.751 ************************************ 00:14:19.751 END TEST xnvme_rpc 00:14:19.751 ************************************ 00:14:19.751 00:14:19.751 real 0m2.605s 00:14:19.751 user 0m2.706s 00:14:19.751 sys 0m0.379s 00:14:19.751 01:40:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:19.751 01:40:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.011 01:40:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:20.011 01:40:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:20.011 01:40:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.011 01:40:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.011 ************************************ 00:14:20.011 START TEST xnvme_bdevperf 00:14:20.011 ************************************ 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.011 01:40:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.011 { 00:14:20.011 "subsystems": [ 00:14:20.011 { 00:14:20.011 "subsystem": "bdev", 00:14:20.011 "config": [ 00:14:20.011 { 00:14:20.012 "params": { 00:14:20.012 "io_mechanism": "io_uring_cmd", 00:14:20.012 "conserve_cpu": false, 00:14:20.012 "filename": "/dev/ng0n1", 00:14:20.012 "name": "xnvme_bdev" 00:14:20.012 }, 00:14:20.012 "method": "bdev_xnvme_create" 00:14:20.012 }, 00:14:20.012 { 00:14:20.012 "method": "bdev_wait_for_examine" 00:14:20.012 } 00:14:20.012 ] 00:14:20.012 } 00:14:20.012 ] 00:14:20.012 } 00:14:20.012 [2024-11-21 01:40:03.827830] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:20.012 [2024-11-21 01:40:03.828082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70735 ] 00:14:20.272 [2024-11-21 01:40:03.988231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.272 [2024-11-21 01:40:04.081874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.533 Running I/O for 5 seconds... 00:14:22.417 38010.00 IOPS, 148.48 MiB/s [2024-11-21T01:40:07.761Z] 37499.50 IOPS, 146.48 MiB/s [2024-11-21T01:40:08.334Z] 36267.00 IOPS, 141.67 MiB/s [2024-11-21T01:40:09.725Z] 35537.75 IOPS, 138.82 MiB/s 00:14:25.768 Latency(us) 00:14:25.768 [2024-11-21T01:40:09.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.768 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:25.768 xnvme_bdev : 5.00 35075.87 137.02 0.00 0.00 1820.92 335.56 9729.58 00:14:25.768 [2024-11-21T01:40:09.725Z] =================================================================================================================== 00:14:25.768 [2024-11-21T01:40:09.725Z] Total : 35075.87 137.02 0.00 0.00 1820.92 335.56 9729.58 00:14:26.341 01:40:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.341 01:40:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:26.341 01:40:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:26.341 01:40:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:26.341 01:40:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.341 { 00:14:26.341 "subsystems": [ 00:14:26.341 { 00:14:26.341 "subsystem": "bdev", 00:14:26.341 "config": [ 00:14:26.341 { 00:14:26.341 "params": { 00:14:26.341 "io_mechanism": "io_uring_cmd", 00:14:26.341 "conserve_cpu": false, 00:14:26.341 "filename": "/dev/ng0n1", 00:14:26.341 "name": "xnvme_bdev" 00:14:26.341 }, 00:14:26.341 "method": "bdev_xnvme_create" 00:14:26.341 }, 00:14:26.341 { 00:14:26.341 "method": "bdev_wait_for_examine" 00:14:26.341 } 00:14:26.341 ] 00:14:26.341 } 00:14:26.341 ] 00:14:26.341 } 00:14:26.341 [2024-11-21 01:40:10.184602] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
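The bdevperf passes for io_uring_cmd iterate the workloads in the io_uring_cmd pattern list: randread above, then randwrite, unmap and write_zeroes below. A sketch of that loop, with $conf standing in for a hypothetical file holding the io_uring_cmd/ng0n1 JSON config shown above:

    conf=/tmp/xnvme_ng0n1.json   # hypothetical path; contents as in the gen_conf JSON above

    for w in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json "$conf" -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done
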
00:14:26.341 [2024-11-21 01:40:10.184765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70809 ] 00:14:26.603 [2024-11-21 01:40:10.350989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.603 [2024-11-21 01:40:10.473194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.863 Running I/O for 5 seconds... 00:14:29.196 33955.00 IOPS, 132.64 MiB/s [2024-11-21T01:40:14.097Z] 34343.00 IOPS, 134.15 MiB/s [2024-11-21T01:40:15.041Z] 34379.00 IOPS, 134.29 MiB/s [2024-11-21T01:40:15.987Z] 34493.50 IOPS, 134.74 MiB/s 00:14:32.030 Latency(us) 00:14:32.030 [2024-11-21T01:40:15.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.030 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:32.030 xnvme_bdev : 5.00 34505.60 134.79 0.00 0.00 1850.81 337.13 5116.85 00:14:32.030 [2024-11-21T01:40:15.987Z] =================================================================================================================== 00:14:32.030 [2024-11-21T01:40:15.987Z] Total : 34505.60 134.79 0.00 0.00 1850.81 337.13 5116.85 00:14:32.600 01:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:32.600 01:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:32.600 01:40:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:32.600 01:40:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:32.600 01:40:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:32.600 { 00:14:32.600 "subsystems": [ 00:14:32.600 { 00:14:32.600 "subsystem": "bdev", 00:14:32.600 "config": [ 00:14:32.600 { 00:14:32.600 "params": { 00:14:32.600 "io_mechanism": "io_uring_cmd", 00:14:32.600 "conserve_cpu": false, 00:14:32.600 "filename": "/dev/ng0n1", 00:14:32.600 "name": "xnvme_bdev" 00:14:32.600 }, 00:14:32.600 "method": "bdev_xnvme_create" 00:14:32.600 }, 00:14:32.600 { 00:14:32.600 "method": "bdev_wait_for_examine" 00:14:32.600 } 00:14:32.600 ] 00:14:32.600 } 00:14:32.600 ] 00:14:32.600 } 00:14:32.861 [2024-11-21 01:40:16.586857] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:32.861 [2024-11-21 01:40:16.587000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70889 ] 00:14:32.861 [2024-11-21 01:40:16.751526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.121 [2024-11-21 01:40:16.869523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.383 Running I/O for 5 seconds... 
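The MiB/s column in these bdevperf tables is simply IOPS times the 4 KiB IO size; checking the randwrite Total row above:

    34505.60 IOPS x 4096 B = 141,334,937.6 B/s ; / 1,048,576 = 134.79 MiB/s  (matches the table)
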
00:14:35.269 79232.00 IOPS, 309.50 MiB/s [2024-11-21T01:40:20.164Z] 79648.00 IOPS, 311.12 MiB/s [2024-11-21T01:40:21.546Z] 79552.00 IOPS, 310.75 MiB/s [2024-11-21T01:40:22.487Z] 78080.00 IOPS, 305.00 MiB/s 00:14:38.530 Latency(us) 00:14:38.530 [2024-11-21T01:40:22.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.530 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:38.530 xnvme_bdev : 5.00 80671.94 315.12 0.00 0.00 789.91 428.50 3604.48 00:14:38.530 [2024-11-21T01:40:22.487Z] =================================================================================================================== 00:14:38.530 [2024-11-21T01:40:22.487Z] Total : 80671.94 315.12 0.00 0.00 789.91 428.50 3604.48 00:14:38.790 01:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:38.790 01:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:38.790 01:40:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:38.790 01:40:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:38.790 01:40:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:38.790 { 00:14:38.790 "subsystems": [ 00:14:38.790 { 00:14:38.790 "subsystem": "bdev", 00:14:38.790 "config": [ 00:14:38.790 { 00:14:38.790 "params": { 00:14:38.790 "io_mechanism": "io_uring_cmd", 00:14:38.790 "conserve_cpu": false, 00:14:38.790 "filename": "/dev/ng0n1", 00:14:38.790 "name": "xnvme_bdev" 00:14:38.790 }, 00:14:38.790 "method": "bdev_xnvme_create" 00:14:38.790 }, 00:14:38.790 { 00:14:38.790 "method": "bdev_wait_for_examine" 00:14:38.790 } 00:14:38.790 ] 00:14:38.790 } 00:14:38.790 ] 00:14:38.790 } 00:14:39.049 [2024-11-21 01:40:22.766509] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:14:39.049 [2024-11-21 01:40:22.766638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70958 ] 00:14:39.049 [2024-11-21 01:40:22.922693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.049 [2024-11-21 01:40:23.000192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.309 Running I/O for 5 seconds... 
00:14:41.640 3772.00 IOPS, 14.73 MiB/s [2024-11-21T01:40:26.537Z] 2307.00 IOPS, 9.01 MiB/s [2024-11-21T01:40:27.480Z] 10516.33 IOPS, 41.08 MiB/s [2024-11-21T01:40:28.453Z] 20407.75 IOPS, 79.72 MiB/s [2024-11-21T01:40:28.453Z] 25689.00 IOPS, 100.35 MiB/s 00:14:44.496 Latency(us) 00:14:44.496 [2024-11-21T01:40:28.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.496 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:44.496 xnvme_bdev : 5.00 25672.55 100.28 0.00 0.00 2488.02 64.20 217781.17 00:14:44.496 [2024-11-21T01:40:28.453Z] =================================================================================================================== 00:14:44.496 [2024-11-21T01:40:28.453Z] Total : 25672.55 100.28 0.00 0.00 2488.02 64.20 217781.17 00:14:45.067 ************************************ 00:14:45.067 END TEST xnvme_bdevperf 00:14:45.067 ************************************ 00:14:45.067 00:14:45.067 real 0m25.192s 00:14:45.067 user 0m13.836s 00:14:45.067 sys 0m10.861s 00:14:45.067 01:40:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.067 01:40:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.067 01:40:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:45.067 01:40:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:45.067 01:40:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.067 01:40:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 ************************************ 00:14:45.328 START TEST xnvme_fio_plugin 00:14:45.328 ************************************ 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
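The trace that follows is locating the ASan runtime the external ioengine was linked against so it can be preloaded ahead of fio; condensed, the steps amount to the sketch below (paths from this environment, JSON bdev config supplied on --spdk_json_conf as in the trace):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Find the libasan the spdk_bdev ioengine links against.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload libasan together with the plugin so its interceptors are in place
    # before fio loads the engine, then run the same 4k randread job.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev
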
00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:45.328 01:40:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:45.328 { 00:14:45.328 "subsystems": [ 00:14:45.328 { 00:14:45.328 "subsystem": "bdev", 00:14:45.328 "config": [ 00:14:45.328 { 00:14:45.328 "params": { 00:14:45.328 "io_mechanism": "io_uring_cmd", 00:14:45.328 "conserve_cpu": false, 00:14:45.328 "filename": "/dev/ng0n1", 00:14:45.328 "name": "xnvme_bdev" 00:14:45.328 }, 00:14:45.328 "method": "bdev_xnvme_create" 00:14:45.328 }, 00:14:45.328 { 00:14:45.328 "method": "bdev_wait_for_examine" 00:14:45.328 } 00:14:45.328 ] 00:14:45.328 } 00:14:45.328 ] 00:14:45.328 } 00:14:45.328 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:45.328 fio-3.35 00:14:45.328 Starting 1 thread 00:14:51.919 00:14:51.919 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71073: Thu Nov 21 01:40:34 2024 00:14:51.919 read: IOPS=36.3k, BW=142MiB/s (149MB/s)(710MiB/5002msec) 00:14:51.919 slat (nsec): min=2734, max=92974, avg=3729.12, stdev=2181.04 00:14:51.920 clat (usec): min=898, max=6163, avg=1609.04, stdev=293.39 00:14:51.920 lat (usec): min=901, max=6166, avg=1612.77, stdev=293.85 00:14:51.920 clat percentiles (usec): 00:14:51.920 | 1.00th=[ 1074], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1385], 00:14:51.920 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1647], 00:14:51.920 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1991], 95.00th=[ 2114], 00:14:51.920 | 99.00th=[ 2376], 99.50th=[ 2507], 99.90th=[ 2802], 99.95th=[ 3163], 00:14:51.920 | 99.99th=[ 6128] 00:14:51.920 bw ( KiB/s): min=138240, max=169984, per=100.00%, avg=145351.11, stdev=9563.05, samples=9 00:14:51.920 iops : min=34560, max=42496, avg=36337.78, stdev=2390.76, samples=9 00:14:51.920 lat (usec) : 1000=0.20% 00:14:51.920 lat (msec) : 2=90.46%, 4=9.30%, 10=0.04% 00:14:51.920 cpu : usr=34.37%, sys=64.29%, ctx=13, majf=0, minf=762 00:14:51.920 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:51.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.920 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:14:51.920 issued rwts: total=181696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.920 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:51.920 00:14:51.920 Run status group 0 (all jobs): 00:14:51.920 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=710MiB (744MB), run=5002-5002msec 00:14:51.920 ----------------------------------------------------- 00:14:51.920 Suppressions used: 00:14:51.920 count bytes template 00:14:51.920 1 11 /usr/src/fio/parse.c 00:14:51.920 1 8 libtcmalloc_minimal.so 00:14:51.920 1 904 libcrypto.so 00:14:51.920 ----------------------------------------------------- 00:14:51.920 00:14:52.181 01:40:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.182 01:40:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.182 { 00:14:52.182 "subsystems": [ 00:14:52.182 { 00:14:52.182 "subsystem": "bdev", 00:14:52.182 "config": [ 00:14:52.182 { 00:14:52.182 "params": { 00:14:52.182 "io_mechanism": "io_uring_cmd", 00:14:52.182 "conserve_cpu": false, 00:14:52.182 "filename": "/dev/ng0n1", 00:14:52.182 "name": "xnvme_bdev" 00:14:52.182 }, 00:14:52.182 "method": "bdev_xnvme_create" 00:14:52.182 }, 00:14:52.182 { 00:14:52.182 "method": "bdev_wait_for_examine" 00:14:52.182 } 00:14:52.182 ] 00:14:52.182 } 00:14:52.182 ] 00:14:52.182 } 00:14:52.182 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:52.182 fio-3.35 00:14:52.182 Starting 1 thread 00:14:58.768 00:14:58.768 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71166: Thu Nov 21 01:40:41 2024 00:14:58.768 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(725MiB/5001msec); 0 zone resets 00:14:58.768 slat (usec): min=2, max=130, avg= 4.12, stdev= 2.34 00:14:58.768 clat (usec): min=168, max=6686, avg=1558.99, stdev=303.20 00:14:58.768 lat (usec): min=172, max=6694, avg=1563.11, stdev=303.77 00:14:58.768 clat percentiles (usec): 00:14:58.768 | 1.00th=[ 1037], 5.00th=[ 1139], 10.00th=[ 1205], 20.00th=[ 1303], 00:14:58.769 | 30.00th=[ 1385], 40.00th=[ 1467], 50.00th=[ 1532], 60.00th=[ 1598], 00:14:58.769 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2057], 00:14:58.769 | 99.00th=[ 2409], 99.50th=[ 2606], 99.90th=[ 3490], 99.95th=[ 3785], 00:14:58.769 | 99.99th=[ 4883] 00:14:58.769 bw ( KiB/s): min=135800, max=169616, per=100.00%, avg=148633.78, stdev=12220.57, samples=9 00:14:58.769 iops : min=33950, max=42404, avg=37158.44, stdev=3055.14, samples=9 00:14:58.769 lat (usec) : 250=0.01%, 500=0.02%, 750=0.20%, 1000=0.34% 00:14:58.769 lat (msec) : 2=92.73%, 4=6.66%, 10=0.03% 00:14:58.769 cpu : usr=33.42%, sys=65.14%, ctx=11, majf=0, minf=762 00:14:58.769 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.2%, 16=24.7%, 32=50.9%, >=64=1.6% 00:14:58.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.769 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:58.769 issued rwts: total=0,185620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.769 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.769 00:14:58.769 Run status group 0 (all jobs): 00:14:58.769 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=725MiB (760MB), run=5001-5001msec 00:14:59.029 ----------------------------------------------------- 00:14:59.029 Suppressions used: 00:14:59.029 count bytes template 00:14:59.029 1 11 /usr/src/fio/parse.c 00:14:59.029 1 8 libtcmalloc_minimal.so 00:14:59.029 1 904 libcrypto.so 00:14:59.029 ----------------------------------------------------- 00:14:59.029 00:14:59.029 00:14:59.029 real 0m13.733s 00:14:59.029 user 0m6.210s 00:14:59.029 sys 0m7.050s 00:14:59.029 01:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.029 ************************************ 00:14:59.029 END TEST xnvme_fio_plugin 00:14:59.029 ************************************ 00:14:59.029 01:40:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.029 01:40:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:59.029 01:40:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:59.029 01:40:42 nvme_xnvme -- xnvme/xnvme.sh@84 -- 
# conserve_cpu=true 00:14:59.029 01:40:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:59.029 01:40:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.029 01:40:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.029 01:40:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:59.029 ************************************ 00:14:59.029 START TEST xnvme_rpc 00:14:59.029 ************************************ 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:59.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71250 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71250 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71250 ']' 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.029 01:40:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:59.029 [2024-11-21 01:40:42.909561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:14:59.029 [2024-11-21 01:40:42.909734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71250 ] 00:14:59.290 [2024-11-21 01:40:43.073830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.290 [2024-11-21 01:40:43.201229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 xnvme_bdev 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.231 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.232 01:40:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71250 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71250 ']' 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71250 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71250 00:15:00.232 killing process with pid 71250 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71250' 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71250 00:15:00.232 01:40:44 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71250 00:15:02.148 00:15:02.148 real 0m2.902s 00:15:02.148 user 0m2.885s 00:15:02.148 sys 0m0.485s 00:15:02.148 01:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.148 01:40:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.148 ************************************ 00:15:02.148 END TEST xnvme_rpc 00:15:02.148 ************************************ 00:15:02.148 01:40:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:02.148 01:40:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.148 01:40:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.148 01:40:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:02.148 ************************************ 00:15:02.148 START TEST xnvme_bdevperf 00:15:02.148 ************************************ 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.148 01:40:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.148 { 00:15:02.148 "subsystems": [ 00:15:02.148 { 00:15:02.148 "subsystem": "bdev", 00:15:02.148 "config": [ 00:15:02.148 { 00:15:02.148 "params": { 00:15:02.148 "io_mechanism": "io_uring_cmd", 00:15:02.148 "conserve_cpu": true, 00:15:02.148 "filename": "/dev/ng0n1", 00:15:02.148 "name": "xnvme_bdev" 00:15:02.148 }, 00:15:02.148 "method": "bdev_xnvme_create" 00:15:02.148 }, 00:15:02.148 { 00:15:02.148 "method": "bdev_wait_for_examine" 00:15:02.148 } 00:15:02.148 ] 00:15:02.148 } 00:15:02.148 ] 00:15:02.148 } 00:15:02.148 [2024-11-21 01:40:45.869288] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:02.148 [2024-11-21 01:40:45.869430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71322 ] 00:15:02.148 [2024-11-21 01:40:46.032967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.409 [2024-11-21 01:40:46.160867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.670 Running I/O for 5 seconds... 00:15:04.554 41920.00 IOPS, 163.75 MiB/s [2024-11-21T01:40:49.454Z] 38976.00 IOPS, 152.25 MiB/s [2024-11-21T01:40:50.839Z] 38272.00 IOPS, 149.50 MiB/s [2024-11-21T01:40:51.782Z] 37584.00 IOPS, 146.81 MiB/s 00:15:07.825 Latency(us) 00:15:07.825 [2024-11-21T01:40:51.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.825 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:07.825 xnvme_bdev : 5.00 36951.01 144.34 0.00 0.00 1727.80 869.61 4839.58 00:15:07.825 [2024-11-21T01:40:51.782Z] =================================================================================================================== 00:15:07.825 [2024-11-21T01:40:51.782Z] Total : 36951.01 144.34 0.00 0.00 1727.80 869.61 4839.58 00:15:08.398 01:40:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.398 01:40:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:08.398 01:40:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:08.398 01:40:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:08.398 01:40:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.398 { 00:15:08.398 "subsystems": [ 00:15:08.398 { 00:15:08.398 "subsystem": "bdev", 00:15:08.398 "config": [ 00:15:08.398 { 00:15:08.398 "params": { 00:15:08.398 "io_mechanism": "io_uring_cmd", 00:15:08.398 "conserve_cpu": true, 00:15:08.398 "filename": "/dev/ng0n1", 00:15:08.398 "name": "xnvme_bdev" 00:15:08.398 }, 00:15:08.398 "method": "bdev_xnvme_create" 00:15:08.398 }, 00:15:08.398 { 00:15:08.398 "method": "bdev_wait_for_examine" 00:15:08.398 } 00:15:08.398 ] 00:15:08.398 } 00:15:08.398 ] 00:15:08.398 } 00:15:08.398 [2024-11-21 01:40:52.299522] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:15:08.398 [2024-11-21 01:40:52.300070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71396 ] 00:15:08.658 [2024-11-21 01:40:52.461601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.659 [2024-11-21 01:40:52.578956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.919 Running I/O for 5 seconds... 00:15:11.252 37347.00 IOPS, 145.89 MiB/s [2024-11-21T01:40:56.153Z] 36764.00 IOPS, 143.61 MiB/s [2024-11-21T01:40:57.099Z] 37231.33 IOPS, 145.43 MiB/s [2024-11-21T01:40:58.042Z] 37200.50 IOPS, 145.31 MiB/s [2024-11-21T01:40:58.042Z] 37362.40 IOPS, 145.95 MiB/s 00:15:14.085 Latency(us) 00:15:14.085 [2024-11-21T01:40:58.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.085 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:14.085 xnvme_bdev : 5.00 37359.01 145.93 0.00 0.00 1708.55 693.17 6956.90 00:15:14.085 [2024-11-21T01:40:58.042Z] =================================================================================================================== 00:15:14.085 [2024-11-21T01:40:58.042Z] Total : 37359.01 145.93 0.00 0.00 1708.55 693.17 6956.90 00:15:15.028 01:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.028 01:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:15.028 01:40:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.028 01:40:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.028 01:40:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.028 { 00:15:15.028 "subsystems": [ 00:15:15.028 { 00:15:15.028 "subsystem": "bdev", 00:15:15.028 "config": [ 00:15:15.028 { 00:15:15.028 "params": { 00:15:15.028 "io_mechanism": "io_uring_cmd", 00:15:15.028 "conserve_cpu": true, 00:15:15.028 "filename": "/dev/ng0n1", 00:15:15.028 "name": "xnvme_bdev" 00:15:15.028 }, 00:15:15.028 "method": "bdev_xnvme_create" 00:15:15.028 }, 00:15:15.028 { 00:15:15.028 "method": "bdev_wait_for_examine" 00:15:15.028 } 00:15:15.028 ] 00:15:15.028 } 00:15:15.028 ] 00:15:15.028 } 00:15:15.028 [2024-11-21 01:40:58.799631] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:15.028 [2024-11-21 01:40:58.799770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71477 ] 00:15:15.028 [2024-11-21 01:40:58.968539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.289 [2024-11-21 01:40:59.103887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.550 Running I/O for 5 seconds... 
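Each bdevperf run in this section is driven the same way: the harness generates a one-bdev JSON configuration (gen_conf) and hands it to bdevperf on a file descriptor (/dev/fd/62), selecting the workload with -w. A minimal sketch of the unmap run started above, assuming the same repository layout, that /dev/ng0n1 is an io_uring_cmd-capable NVMe char device, and substituting an illustrative temporary file for the fd redirection:

# illustrative reproduction; /tmp/xnvme_bdev.json is a made-up name, the JSON and flags are taken from this run
cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_xnvme_create",
          "params": {
            "io_mechanism": "io_uring_cmd",
            "conserve_cpu": true,
            "filename": "/dev/ng0n1",
            "name": "xnvme_bdev"
          }
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096

The xnvme_fio_plugin runs earlier in the log reuse the same JSON, handing it to fio's spdk_bdev external engine via --spdk_json_conf while LD_PRELOADing build/fio/spdk_bdev (prefixed with libasan when the ASAN build is detected).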
00:15:17.872 74176.00 IOPS, 289.75 MiB/s [2024-11-21T01:41:02.784Z] 76064.00 IOPS, 297.12 MiB/s [2024-11-21T01:41:03.787Z] 78784.00 IOPS, 307.75 MiB/s [2024-11-21T01:41:04.728Z] 81952.00 IOPS, 320.12 MiB/s [2024-11-21T01:41:04.728Z] 81625.60 IOPS, 318.85 MiB/s 00:15:20.771 Latency(us) 00:15:20.771 [2024-11-21T01:41:04.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.771 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:20.771 xnvme_bdev : 5.00 81599.72 318.75 0.00 0.00 780.94 335.56 3213.78 00:15:20.771 [2024-11-21T01:41:04.728Z] =================================================================================================================== 00:15:20.771 [2024-11-21T01:41:04.728Z] Total : 81599.72 318.75 0.00 0.00 780.94 335.56 3213.78 00:15:21.344 01:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:21.344 01:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:21.344 01:41:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:21.344 01:41:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:21.344 01:41:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:21.344 { 00:15:21.344 "subsystems": [ 00:15:21.344 { 00:15:21.344 "subsystem": "bdev", 00:15:21.344 "config": [ 00:15:21.344 { 00:15:21.344 "params": { 00:15:21.344 "io_mechanism": "io_uring_cmd", 00:15:21.344 "conserve_cpu": true, 00:15:21.344 "filename": "/dev/ng0n1", 00:15:21.344 "name": "xnvme_bdev" 00:15:21.344 }, 00:15:21.344 "method": "bdev_xnvme_create" 00:15:21.344 }, 00:15:21.344 { 00:15:21.344 "method": "bdev_wait_for_examine" 00:15:21.344 } 00:15:21.344 ] 00:15:21.344 } 00:15:21.344 ] 00:15:21.344 } 00:15:21.344 [2024-11-21 01:41:05.087984] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:21.344 [2024-11-21 01:41:05.088102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71552 ] 00:15:21.344 [2024-11-21 01:41:05.243501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.605 [2024-11-21 01:41:05.330728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.605 Running I/O for 5 seconds... 
00:15:23.927 52634.00 IOPS, 205.60 MiB/s [2024-11-21T01:41:08.827Z] 53567.50 IOPS, 209.25 MiB/s [2024-11-21T01:41:09.770Z] 49517.00 IOPS, 193.43 MiB/s [2024-11-21T01:41:10.712Z] 44405.75 IOPS, 173.46 MiB/s [2024-11-21T01:41:10.712Z] 41001.60 IOPS, 160.16 MiB/s 00:15:26.755 Latency(us) 00:15:26.755 [2024-11-21T01:41:10.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.755 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:26.755 xnvme_bdev : 5.01 40951.16 159.97 0.00 0.00 1556.86 43.72 23895.43 00:15:26.755 [2024-11-21T01:41:10.712Z] =================================================================================================================== 00:15:26.755 [2024-11-21T01:41:10.712Z] Total : 40951.16 159.97 0.00 0.00 1556.86 43.72 23895.43 00:15:27.700 ************************************ 00:15:27.700 END TEST xnvme_bdevperf 00:15:27.700 ************************************ 00:15:27.700 00:15:27.700 real 0m25.515s 00:15:27.700 user 0m16.404s 00:15:27.700 sys 0m7.056s 00:15:27.700 01:41:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.700 01:41:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.700 01:41:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:27.700 01:41:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:27.700 01:41:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.700 01:41:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.700 ************************************ 00:15:27.700 START TEST xnvme_fio_plugin 00:15:27.700 ************************************ 00:15:27.700 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:27.700 01:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:27.700 01:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:27.701 01:41:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:27.701 { 00:15:27.701 "subsystems": [ 00:15:27.701 { 00:15:27.701 "subsystem": "bdev", 00:15:27.701 "config": [ 00:15:27.701 { 00:15:27.701 "params": { 00:15:27.701 "io_mechanism": "io_uring_cmd", 00:15:27.701 "conserve_cpu": true, 00:15:27.701 "filename": "/dev/ng0n1", 00:15:27.701 "name": "xnvme_bdev" 00:15:27.701 }, 00:15:27.701 "method": "bdev_xnvme_create" 00:15:27.701 }, 00:15:27.701 { 00:15:27.701 "method": "bdev_wait_for_examine" 00:15:27.701 } 00:15:27.701 ] 00:15:27.701 } 00:15:27.701 ] 00:15:27.701 } 00:15:27.701 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:27.701 fio-3.35 00:15:27.701 Starting 1 thread 00:15:34.296 00:15:34.296 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71665: Thu Nov 21 01:41:17 2024 00:15:34.296 read: IOPS=36.4k, BW=142MiB/s (149MB/s)(711MiB/5001msec) 00:15:34.296 slat (usec): min=2, max=282, avg= 3.85, stdev= 2.90 00:15:34.296 clat (usec): min=1019, max=3763, avg=1601.49, stdev=236.36 00:15:34.296 lat (usec): min=1023, max=3793, avg=1605.34, stdev=236.90 00:15:34.296 clat percentiles (usec): 00:15:34.296 | 1.00th=[ 1205], 5.00th=[ 1303], 10.00th=[ 1352], 20.00th=[ 1418], 00:15:34.296 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1614], 00:15:34.296 | 70.00th=[ 1680], 80.00th=[ 1762], 90.00th=[ 1909], 95.00th=[ 2040], 00:15:34.296 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 2868], 99.95th=[ 2966], 00:15:34.296 | 99.99th=[ 3458] 00:15:34.296 bw ( KiB/s): min=138496, max=150528, per=100.00%, avg=145631.78, stdev=3700.88, samples=9 00:15:34.296 iops : min=34624, max=37632, avg=36407.89, stdev=925.24, samples=9 00:15:34.296 lat (msec) : 2=93.59%, 4=6.41% 00:15:34.296 cpu : usr=42.26%, sys=53.82%, ctx=56, majf=0, minf=762 00:15:34.296 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:34.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:34.296 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:34.296 issued rwts: 
total=181920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:34.296 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:34.296 00:15:34.296 Run status group 0 (all jobs): 00:15:34.296 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=711MiB (745MB), run=5001-5001msec 00:15:34.296 ----------------------------------------------------- 00:15:34.296 Suppressions used: 00:15:34.296 count bytes template 00:15:34.296 1 11 /usr/src/fio/parse.c 00:15:34.296 1 8 libtcmalloc_minimal.so 00:15:34.296 1 904 libcrypto.so 00:15:34.296 ----------------------------------------------------- 00:15:34.296 00:15:34.557 01:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.557 01:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.558 01:41:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:15:34.558 { 00:15:34.558 "subsystems": [ 00:15:34.558 { 00:15:34.558 "subsystem": "bdev", 00:15:34.558 "config": [ 00:15:34.558 { 00:15:34.558 "params": { 00:15:34.558 "io_mechanism": "io_uring_cmd", 00:15:34.558 "conserve_cpu": true, 00:15:34.558 "filename": "/dev/ng0n1", 00:15:34.558 "name": "xnvme_bdev" 00:15:34.558 }, 00:15:34.558 "method": "bdev_xnvme_create" 00:15:34.558 }, 00:15:34.558 { 00:15:34.558 "method": "bdev_wait_for_examine" 00:15:34.558 } 00:15:34.558 ] 00:15:34.558 } 00:15:34.558 ] 00:15:34.558 } 00:15:34.558 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.558 fio-3.35 00:15:34.558 Starting 1 thread 00:15:41.142 00:15:41.142 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71756: Thu Nov 21 01:41:24 2024 00:15:41.142 write: IOPS=32.3k, BW=126MiB/s (132MB/s)(633MiB/5015msec); 0 zone resets 00:15:41.142 slat (usec): min=2, max=283, avg= 4.13, stdev= 3.24 00:15:41.142 clat (usec): min=69, max=34709, avg=1818.26, stdev=2015.49 00:15:41.142 lat (usec): min=72, max=34714, avg=1822.40, stdev=2015.59 00:15:41.143 clat percentiles (usec): 00:15:41.143 | 1.00th=[ 701], 5.00th=[ 1205], 10.00th=[ 1287], 20.00th=[ 1369], 00:15:41.143 | 30.00th=[ 1434], 40.00th=[ 1500], 50.00th=[ 1565], 60.00th=[ 1631], 00:15:41.143 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1942], 95.00th=[ 2114], 00:15:41.143 | 99.00th=[15664], 99.50th=[19006], 99.90th=[22676], 99.95th=[24249], 00:15:41.143 | 99.99th=[29492] 00:15:41.143 bw ( KiB/s): min=27744, max=149032, per=100.00%, avg=129510.40, stdev=38207.79, samples=10 00:15:41.143 iops : min= 6936, max=37258, avg=32377.60, stdev=9551.95, samples=10 00:15:41.143 lat (usec) : 100=0.01%, 250=0.12%, 500=0.48%, 750=0.56%, 1000=0.38% 00:15:41.143 lat (msec) : 2=90.45%, 4=6.45%, 10=0.05%, 20=1.15%, 50=0.36% 00:15:41.143 cpu : usr=53.11%, sys=41.58%, ctx=67, majf=0, minf=762 00:15:41.143 IO depths : 1=1.4%, 2=2.9%, 4=5.9%, 8=12.0%, 16=24.2%, 32=51.2%, >=64=2.3% 00:15:41.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.143 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:41.143 issued rwts: total=0,161951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.143 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.143 00:15:41.143 Run status group 0 (all jobs): 00:15:41.143 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=633MiB (663MB), run=5015-5015msec 00:15:41.143 ----------------------------------------------------- 00:15:41.143 Suppressions used: 00:15:41.143 count bytes template 00:15:41.143 1 11 /usr/src/fio/parse.c 00:15:41.143 1 8 libtcmalloc_minimal.so 00:15:41.143 1 904 libcrypto.so 00:15:41.143 ----------------------------------------------------- 00:15:41.143 00:15:41.401 ************************************ 00:15:41.401 END TEST xnvme_fio_plugin 00:15:41.401 ************************************ 00:15:41.401 00:15:41.401 real 0m13.741s 00:15:41.401 user 0m7.589s 00:15:41.401 sys 0m5.362s 00:15:41.402 01:41:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.402 01:41:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 01:41:25 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71250 00:15:41.402 01:41:25 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71250 ']' 00:15:41.402 01:41:25 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71250 00:15:41.402 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71250) - No such process 00:15:41.402 Process with pid 71250 is not found 00:15:41.402 01:41:25 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71250 is not found' 00:15:41.402 ************************************ 00:15:41.402 END TEST nvme_xnvme 00:15:41.402 ************************************ 00:15:41.402 00:15:41.402 real 3m29.823s 00:15:41.402 user 1m56.203s 00:15:41.402 sys 1m18.840s 00:15:41.402 01:41:25 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.402 01:41:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 01:41:25 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.402 01:41:25 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.402 01:41:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.402 01:41:25 -- common/autotest_common.sh@10 -- # set +x 00:15:41.402 ************************************ 00:15:41.402 START TEST blockdev_xnvme 00:15:41.402 ************************************ 00:15:41.402 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.402 * Looking for test storage... 00:15:41.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:41.402 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.402 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.402 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.402 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.402 01:41:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.662 01:41:25 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.662 --rc genhtml_branch_coverage=1 00:15:41.662 --rc genhtml_function_coverage=1 00:15:41.662 --rc genhtml_legend=1 00:15:41.662 --rc geninfo_all_blocks=1 00:15:41.662 --rc geninfo_unexecuted_blocks=1 00:15:41.662 00:15:41.662 ' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.662 --rc genhtml_branch_coverage=1 00:15:41.662 --rc genhtml_function_coverage=1 00:15:41.662 --rc genhtml_legend=1 00:15:41.662 --rc geninfo_all_blocks=1 00:15:41.662 --rc geninfo_unexecuted_blocks=1 00:15:41.662 00:15:41.662 ' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.662 --rc genhtml_branch_coverage=1 00:15:41.662 --rc genhtml_function_coverage=1 00:15:41.662 --rc genhtml_legend=1 00:15:41.662 --rc geninfo_all_blocks=1 00:15:41.662 --rc geninfo_unexecuted_blocks=1 00:15:41.662 00:15:41.662 ' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.662 --rc genhtml_branch_coverage=1 00:15:41.662 --rc genhtml_function_coverage=1 00:15:41.662 --rc genhtml_legend=1 00:15:41.662 --rc geninfo_all_blocks=1 00:15:41.662 --rc geninfo_unexecuted_blocks=1 00:15:41.662 00:15:41.662 ' 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71890 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71890 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71890 ']' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.662 01:41:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:41.662 01:41:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.662 [2024-11-21 01:41:25.453898] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:15:41.662 [2024-11-21 01:41:25.454195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71890 ] 00:15:41.662 [2024-11-21 01:41:25.610822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.922 [2024-11-21 01:41:25.703971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.490 01:41:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.490 01:41:26 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:42.490 01:41:26 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:42.490 01:41:26 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:42.490 01:41:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:42.490 01:41:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:42.490 01:41:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:43.061 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.322 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:43.322 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:43.322 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:43.322 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:43.322 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:43.322 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:43.323 01:41:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:43.323 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:43.584 nvme0n1 00:15:43.584 nvme0n2 00:15:43.584 nvme0n3 00:15:43.584 nvme1n1 00:15:43.584 nvme2n1 00:15:43.584 nvme3n1 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.584 01:41:27 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.584 01:41:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:43.584 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:43.585 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f0b3ef26-bea3-4abf-9481-a24cf7d2e9c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f0b3ef26-bea3-4abf-9481-a24cf7d2e9c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fe0c56af-9aff-4048-912c-72fc028b6a44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe0c56af-9aff-4048-912c-72fc028b6a44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d842bbc0-774a-49f7-8805-1f36691ce186"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d842bbc0-774a-49f7-8805-1f36691ce186",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "38450bde-18c6-4176-aac1-52730b2eead6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "38450bde-18c6-4176-aac1-52730b2eead6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "b1f55aa6-ee37-4b56-9486-3c68a4eaf5b5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b1f55aa6-ee37-4b56-9486-3c68a4eaf5b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c4d0eb36-58cf-4cef-a891-4fbe59b2a922"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c4d0eb36-58cf-4cef-a891-4fbe59b2a922",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:43.585 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:43.585 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:43.585 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:43.585 01:41:27 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 71890 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71890 ']' 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71890 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71890 00:15:43.585 killing process with pid 71890 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71890' 00:15:43.585 01:41:27 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71890 00:15:43.585 
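Annotation: the trace above walks every /dev/nvme*n* node, queues one bdev_xnvme_create call per namespace with the io_uring I/O mechanism, replays the queued commands over RPC, and then dumps the resulting unclaimed bdevs with bdev_get_bdevs piped through jq. For reference, a minimal sketch of the same sequence run by hand against an already-running SPDK target ($SPDK_REPO stands in for the /home/vagrant/spdk_repo/spdk checkout used in this log; the -c flag is passed through exactly as the script does above):

  RPC="$SPDK_REPO/scripts/rpc.py"
  # create an xNVMe bdev named nvme0n1 on top of /dev/nvme0n1 using io_uring
  $RPC bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
  # let bdev examination settle, then list the names of all unclaimed bdevs
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'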
01:41:27 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71890 00:15:45.500 01:41:29 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:45.500 01:41:29 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:45.500 01:41:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:45.500 01:41:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.500 01:41:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.500 ************************************ 00:15:45.500 START TEST bdev_hello_world 00:15:45.500 ************************************ 00:15:45.500 01:41:29 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:45.500 [2024-11-21 01:41:29.211253] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:15:45.500 [2024-11-21 01:41:29.211395] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72163 ] 00:15:45.500 [2024-11-21 01:41:29.374586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.761 [2024-11-21 01:41:29.493681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.023 [2024-11-21 01:41:29.891743] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:46.023 [2024-11-21 01:41:29.891804] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:46.023 [2024-11-21 01:41:29.891822] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:46.023 [2024-11-21 01:41:29.893926] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:46.023 [2024-11-21 01:41:29.894820] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:46.023 [2024-11-21 01:41:29.895034] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:46.023 [2024-11-21 01:41:29.895457] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
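Annotation: the bdev_hello_world test above simply runs the prebuilt hello_bdev example against the first xnvme bdev, using the JSON config written earlier in the job. Reproducing it by hand is a one-liner (same $SPDK_REPO shorthand as above; nvme0n1 is the bdev name reported by bdev_get_bdevs):

  # open bdev nvme0n1, write "Hello World!" to it, read it back, then stop the app
  $SPDK_REPO/build/examples/hello_bdev --json $SPDK_REPO/test/bdev/bdev.json -b nvme0n1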
00:15:46.023 00:15:46.023 [2024-11-21 01:41:29.895489] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:46.967 00:15:46.967 real 0m1.532s 00:15:46.967 user 0m1.164s 00:15:46.967 sys 0m0.217s 00:15:46.967 01:41:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.967 01:41:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:46.967 ************************************ 00:15:46.967 END TEST bdev_hello_world 00:15:46.967 ************************************ 00:15:46.967 01:41:30 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:46.967 01:41:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:46.967 01:41:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.967 01:41:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.967 ************************************ 00:15:46.967 START TEST bdev_bounds 00:15:46.967 ************************************ 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:46.967 Process bdevio pid: 72205 00:15:46.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72205 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72205' 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72205 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72205 ']' 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.967 01:41:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:46.967 [2024-11-21 01:41:30.818997] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
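Annotation: the bdev_bounds test starting here is driven by the bdevio app: blockdev.sh launches it with -w so it sits waiting on /var/tmp/spdk.sock, then kicks off the CUnit suites shown below through its helper script. A rough sketch of that two-step flow, under the same assumptions as the earlier sketches:

  # start bdevio against the same JSON config and let it wait for an RPC trigger
  $SPDK_REPO/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK_REPO/test/bdev/bdev.json &
  # once the socket is up, fire the boundary-condition suites against every registered bdev
  $SPDK_REPO/test/bdev/bdevio/tests.py perform_tests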
00:15:46.967 [2024-11-21 01:41:30.819790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72205 ] 00:15:47.227 [2024-11-21 01:41:30.982424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.227 [2024-11-21 01:41:31.092894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.227 [2024-11-21 01:41:31.093174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.227 [2024-11-21 01:41:31.093192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.800 01:41:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.800 01:41:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:47.800 01:41:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:48.061 I/O targets: 00:15:48.061 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:48.061 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:48.061 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:48.061 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:48.061 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:48.061 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:48.061 00:15:48.061 00:15:48.061 CUnit - A unit testing framework for C - Version 2.1-3 00:15:48.061 http://cunit.sourceforge.net/ 00:15:48.061 00:15:48.061 00:15:48.061 Suite: bdevio tests on: nvme3n1 00:15:48.061 Test: blockdev write read block ...passed 00:15:48.061 Test: blockdev write zeroes read block ...passed 00:15:48.061 Test: blockdev write zeroes read no split ...passed 00:15:48.061 Test: blockdev write zeroes read split ...passed 00:15:48.061 Test: blockdev write zeroes read split partial ...passed 00:15:48.061 Test: blockdev reset ...passed 00:15:48.061 Test: blockdev write read 8 blocks ...passed 00:15:48.061 Test: blockdev write read size > 128k ...passed 00:15:48.061 Test: blockdev write read invalid size ...passed 00:15:48.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.061 Test: blockdev write read max offset ...passed 00:15:48.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.061 Test: blockdev writev readv 8 blocks ...passed 00:15:48.061 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.061 Test: blockdev writev readv block ...passed 00:15:48.061 Test: blockdev writev readv size > 128k ...passed 00:15:48.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.061 Test: blockdev comparev and writev ...passed 00:15:48.061 Test: blockdev nvme passthru rw ...passed 00:15:48.061 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.061 Test: blockdev nvme admin passthru ...passed 00:15:48.061 Test: blockdev copy ...passed 00:15:48.061 Suite: bdevio tests on: nvme2n1 00:15:48.061 Test: blockdev write read block ...passed 00:15:48.061 Test: blockdev write zeroes read block ...passed 00:15:48.061 Test: blockdev write zeroes read no split ...passed 00:15:48.061 Test: blockdev write zeroes read split ...passed 00:15:48.061 Test: blockdev write zeroes read split partial ...passed 00:15:48.061 Test: blockdev reset ...passed 
00:15:48.061 Test: blockdev write read 8 blocks ...passed 00:15:48.061 Test: blockdev write read size > 128k ...passed 00:15:48.061 Test: blockdev write read invalid size ...passed 00:15:48.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.061 Test: blockdev write read max offset ...passed 00:15:48.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.061 Test: blockdev writev readv 8 blocks ...passed 00:15:48.061 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.061 Test: blockdev writev readv block ...passed 00:15:48.061 Test: blockdev writev readv size > 128k ...passed 00:15:48.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.061 Test: blockdev comparev and writev ...passed 00:15:48.061 Test: blockdev nvme passthru rw ...passed 00:15:48.061 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.061 Test: blockdev nvme admin passthru ...passed 00:15:48.061 Test: blockdev copy ...passed 00:15:48.061 Suite: bdevio tests on: nvme1n1 00:15:48.061 Test: blockdev write read block ...passed 00:15:48.061 Test: blockdev write zeroes read block ...passed 00:15:48.061 Test: blockdev write zeroes read no split ...passed 00:15:48.061 Test: blockdev write zeroes read split ...passed 00:15:48.061 Test: blockdev write zeroes read split partial ...passed 00:15:48.061 Test: blockdev reset ...passed 00:15:48.061 Test: blockdev write read 8 blocks ...passed 00:15:48.061 Test: blockdev write read size > 128k ...passed 00:15:48.061 Test: blockdev write read invalid size ...passed 00:15:48.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.061 Test: blockdev write read max offset ...passed 00:15:48.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.061 Test: blockdev writev readv 8 blocks ...passed 00:15:48.061 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.061 Test: blockdev writev readv block ...passed 00:15:48.061 Test: blockdev writev readv size > 128k ...passed 00:15:48.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.324 Test: blockdev comparev and writev ...passed 00:15:48.324 Test: blockdev nvme passthru rw ...passed 00:15:48.324 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.324 Test: blockdev nvme admin passthru ...passed 00:15:48.324 Test: blockdev copy ...passed 00:15:48.324 Suite: bdevio tests on: nvme0n3 00:15:48.324 Test: blockdev write read block ...passed 00:15:48.324 Test: blockdev write zeroes read block ...passed 00:15:48.324 Test: blockdev write zeroes read no split ...passed 00:15:48.324 Test: blockdev write zeroes read split ...passed 00:15:48.324 Test: blockdev write zeroes read split partial ...passed 00:15:48.324 Test: blockdev reset ...passed 00:15:48.324 Test: blockdev write read 8 blocks ...passed 00:15:48.324 Test: blockdev write read size > 128k ...passed 00:15:48.324 Test: blockdev write read invalid size ...passed 00:15:48.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.324 Test: blockdev write read max offset ...passed 00:15:48.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.324 Test: blockdev writev readv 8 blocks 
...passed 00:15:48.324 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.324 Test: blockdev writev readv block ...passed 00:15:48.324 Test: blockdev writev readv size > 128k ...passed 00:15:48.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.324 Test: blockdev comparev and writev ...passed 00:15:48.324 Test: blockdev nvme passthru rw ...passed 00:15:48.324 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.324 Test: blockdev nvme admin passthru ...passed 00:15:48.324 Test: blockdev copy ...passed 00:15:48.324 Suite: bdevio tests on: nvme0n2 00:15:48.324 Test: blockdev write read block ...passed 00:15:48.324 Test: blockdev write zeroes read block ...passed 00:15:48.324 Test: blockdev write zeroes read no split ...passed 00:15:48.324 Test: blockdev write zeroes read split ...passed 00:15:48.324 Test: blockdev write zeroes read split partial ...passed 00:15:48.324 Test: blockdev reset ...passed 00:15:48.324 Test: blockdev write read 8 blocks ...passed 00:15:48.324 Test: blockdev write read size > 128k ...passed 00:15:48.324 Test: blockdev write read invalid size ...passed 00:15:48.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.324 Test: blockdev write read max offset ...passed 00:15:48.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.324 Test: blockdev writev readv 8 blocks ...passed 00:15:48.324 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.324 Test: blockdev writev readv block ...passed 00:15:48.324 Test: blockdev writev readv size > 128k ...passed 00:15:48.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.324 Test: blockdev comparev and writev ...passed 00:15:48.324 Test: blockdev nvme passthru rw ...passed 00:15:48.324 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.324 Test: blockdev nvme admin passthru ...passed 00:15:48.324 Test: blockdev copy ...passed 00:15:48.324 Suite: bdevio tests on: nvme0n1 00:15:48.324 Test: blockdev write read block ...passed 00:15:48.324 Test: blockdev write zeroes read block ...passed 00:15:48.324 Test: blockdev write zeroes read no split ...passed 00:15:48.324 Test: blockdev write zeroes read split ...passed 00:15:48.324 Test: blockdev write zeroes read split partial ...passed 00:15:48.324 Test: blockdev reset ...passed 00:15:48.324 Test: blockdev write read 8 blocks ...passed 00:15:48.324 Test: blockdev write read size > 128k ...passed 00:15:48.324 Test: blockdev write read invalid size ...passed 00:15:48.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:48.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:48.324 Test: blockdev write read max offset ...passed 00:15:48.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:48.324 Test: blockdev writev readv 8 blocks ...passed 00:15:48.324 Test: blockdev writev readv 30 x 1block ...passed 00:15:48.324 Test: blockdev writev readv block ...passed 00:15:48.324 Test: blockdev writev readv size > 128k ...passed 00:15:48.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:48.324 Test: blockdev comparev and writev ...passed 00:15:48.324 Test: blockdev nvme passthru rw ...passed 00:15:48.324 Test: blockdev nvme passthru vendor specific ...passed 00:15:48.324 Test: blockdev nvme admin passthru ...passed 00:15:48.324 Test: blockdev copy ...passed 
00:15:48.324 00:15:48.324 Run Summary: Type Total Ran Passed Failed Inactive 00:15:48.324 suites 6 6 n/a 0 0 00:15:48.324 tests 138 138 138 0 0 00:15:48.324 asserts 780 780 780 0 n/a 00:15:48.324 00:15:48.324 Elapsed time = 1.214 seconds 00:15:48.324 0 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72205 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72205 ']' 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72205 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.324 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72205 00:15:48.586 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:48.586 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:48.586 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72205' 00:15:48.586 killing process with pid 72205 00:15:48.586 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72205 00:15:48.586 01:41:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72205 00:15:49.159 01:41:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:49.159 00:15:49.159 real 0m2.342s 00:15:49.159 user 0m5.712s 00:15:49.159 sys 0m0.360s 00:15:49.159 01:41:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.159 ************************************ 00:15:49.159 END TEST bdev_bounds 00:15:49.159 01:41:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:49.159 ************************************ 00:15:49.422 01:41:33 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:49.422 01:41:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:49.422 01:41:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.422 01:41:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.422 ************************************ 00:15:49.422 START TEST bdev_nbd 00:15:49.422 ************************************ 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72259 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72259 /var/tmp/spdk-nbd.sock 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72259 ']' 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:49.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.422 01:41:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:49.422 [2024-11-21 01:41:33.240041] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
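Annotation: from here the nbd test brings up the minimal bdev_svc app on its own RPC socket (/var/tmp/spdk-nbd.sock) and then, for each bdev, starts an NBD export, pokes it with dd, and tears it down again, which is what the nbd_start_disk / waitfornbd / nbd_stop_disk chatter below amounts to. A condensed sketch of one such round-trip using only the RPCs visible in this trace (the nbd node and the scratch output file are arbitrary):

  RPC="$SPDK_REPO/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # export bdev nvme0n1 through the kernel NBD driver as /dev/nbd0 and confirm the mapping
  $RPC nbd_start_disk nvme0n1 /dev/nbd0
  $RPC nbd_get_disks
  # read a single 4 KiB block straight from the NBD node, as the waitfornbd helper does
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  # detach the export again
  $RPC nbd_stop_disk /dev/nbd0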
00:15:49.422 [2024-11-21 01:41:33.240362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.683 [2024-11-21 01:41:33.403634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.683 [2024-11-21 01:41:33.521890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.256 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.518 
1+0 records in 00:15:50.518 1+0 records out 00:15:50.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994873 s, 4.1 MB/s 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.518 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.780 1+0 records in 00:15:50.780 1+0 records out 00:15:50.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939784 s, 4.4 MB/s 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:50.780 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:51.041 01:41:34 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.041 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.042 1+0 records in 00:15:51.042 1+0 records out 00:15:51.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111565 s, 3.7 MB/s 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.042 01:41:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.303 1+0 records in 00:15:51.303 1+0 records out 00:15:51.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013954 s, 2.9 MB/s 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.303 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.565 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.566 1+0 records in 00:15:51.566 1+0 records out 00:15:51.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000843187 s, 4.9 MB/s 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.566 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:51.827 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:51.827 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:51.827 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:51.827 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:51.827 01:41:35 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.827 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.828 1+0 records in 00:15:51.828 1+0 records out 00:15:51.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010598 s, 3.9 MB/s 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:51.828 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:52.089 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd0", 00:15:52.089 "bdev_name": "nvme0n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd1", 00:15:52.089 "bdev_name": "nvme0n2" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd2", 00:15:52.089 "bdev_name": "nvme0n3" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd3", 00:15:52.089 "bdev_name": "nvme1n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd4", 00:15:52.089 "bdev_name": "nvme2n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd5", 00:15:52.089 "bdev_name": "nvme3n1" 00:15:52.089 } 00:15:52.089 ]' 00:15:52.089 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:52.089 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:52.089 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd0", 00:15:52.089 "bdev_name": "nvme0n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd1", 00:15:52.089 "bdev_name": "nvme0n2" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd2", 00:15:52.089 "bdev_name": "nvme0n3" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd3", 00:15:52.089 "bdev_name": "nvme1n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.089 "nbd_device": "/dev/nbd4", 00:15:52.089 "bdev_name": "nvme2n1" 00:15:52.089 }, 00:15:52.089 { 00:15:52.090 "nbd_device": 
"/dev/nbd5", 00:15:52.090 "bdev_name": "nvme3n1" 00:15:52.090 } 00:15:52.090 ]' 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.090 01:41:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:52.350 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.627 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.933 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:53.194 01:41:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.454 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:53.715 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:53.715 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.716 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:53.977 /dev/nbd0 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:53.977 1+0 records in 00:15:53.977 1+0 records out 00:15:53.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908653 s, 4.5 MB/s 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:53.977 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:53.977 /dev/nbd1 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.239 1+0 records in 00:15:54.239 1+0 records out 00:15:54.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950869 s, 4.3 MB/s 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.239 01:41:37 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.239 01:41:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:54.239 /dev/nbd10 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.239 1+0 records in 00:15:54.239 1+0 records out 00:15:54.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138866 s, 2.9 MB/s 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.239 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.240 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.240 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:54.501 /dev/nbd11 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.501 01:41:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.501 1+0 records in 00:15:54.501 1+0 records out 00:15:54.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00169845 s, 2.4 MB/s 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.501 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:54.764 /dev/nbd12 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:54.764 1+0 records in 00:15:54.764 1+0 records out 00:15:54.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132382 s, 3.1 MB/s 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:54.764 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:55.025 /dev/nbd13 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:55.025 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:55.026 1+0 records in 00:15:55.026 1+0 records out 00:15:55.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129006 s, 3.2 MB/s 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.026 01:41:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd0", 00:15:55.287 "bdev_name": "nvme0n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd1", 00:15:55.287 "bdev_name": "nvme0n2" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd10", 00:15:55.287 "bdev_name": "nvme0n3" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd11", 00:15:55.287 "bdev_name": "nvme1n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd12", 00:15:55.287 "bdev_name": "nvme2n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd13", 00:15:55.287 "bdev_name": "nvme3n1" 00:15:55.287 } 00:15:55.287 ]' 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd0", 00:15:55.287 "bdev_name": "nvme0n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd1", 00:15:55.287 "bdev_name": "nvme0n2" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd10", 00:15:55.287 "bdev_name": "nvme0n3" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd11", 00:15:55.287 "bdev_name": "nvme1n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd12", 00:15:55.287 "bdev_name": "nvme2n1" 00:15:55.287 }, 00:15:55.287 { 00:15:55.287 "nbd_device": "/dev/nbd13", 00:15:55.287 "bdev_name": "nvme3n1" 00:15:55.287 } 00:15:55.287 ]' 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:55.287 /dev/nbd1 00:15:55.287 /dev/nbd10 00:15:55.287 /dev/nbd11 00:15:55.287 /dev/nbd12 00:15:55.287 /dev/nbd13' 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:55.287 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:55.287 /dev/nbd1 00:15:55.287 /dev/nbd10 00:15:55.287 /dev/nbd11 00:15:55.288 /dev/nbd12 00:15:55.288 /dev/nbd13' 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:55.288 256+0 records in 00:15:55.288 256+0 records out 00:15:55.288 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0065802 s, 159 MB/s 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.288 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:55.549 256+0 records in 00:15:55.549 256+0 records out 00:15:55.549 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242499 s, 4.3 MB/s 00:15:55.549 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.549 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:55.810 256+0 records in 00:15:55.810 256+0 records out 00:15:55.810 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173199 s, 
6.1 MB/s 00:15:55.810 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:55.810 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:56.072 256+0 records in 00:15:56.072 256+0 records out 00:15:56.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.23164 s, 4.5 MB/s 00:15:56.072 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.072 01:41:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:56.333 256+0 records in 00:15:56.333 256+0 records out 00:15:56.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.318525 s, 3.3 MB/s 00:15:56.333 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.333 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:56.594 256+0 records in 00:15:56.594 256+0 records out 00:15:56.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1353 s, 7.7 MB/s 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:56.594 256+0 records in 00:15:56.594 256+0 records out 00:15:56.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106881 s, 9.8 MB/s 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:56.594 
01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.594 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:56.856 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.117 01:41:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.378 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.638 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.899 
01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.899 01:41:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:58.161 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:58.162 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:58.162 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:58.423 malloc_lvol_verify 00:15:58.423 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:58.684 d68e394e-b593-4fc0-95c8-524a81825564 00:15:58.684 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:58.944 9c10a996-0664-41ee-87fe-d448329094dc 00:15:58.944 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:59.205 /dev/nbd0 00:15:59.205 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:59.205 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:59.205 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:59.205 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:59.205 01:41:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:59.205 mke2fs 1.47.0 (5-Feb-2023) 00:15:59.205 Discarding device blocks: 0/4096 
done 00:15:59.205 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:59.205 00:15:59.205 Allocating group tables: 0/1 done 00:15:59.205 Writing inode tables: 0/1 done 00:15:59.205 Creating journal (1024 blocks): done 00:15:59.205 Writing superblocks and filesystem accounting information: 0/1 done 00:15:59.205 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.205 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72259 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72259 ']' 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72259 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72259 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.466 killing process with pid 72259 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72259' 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72259 00:15:59.466 01:41:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72259 00:16:00.405 01:41:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:00.405 00:16:00.405 real 0m10.945s 00:16:00.405 user 0m14.761s 00:16:00.405 sys 0m3.749s 00:16:00.405 01:41:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:00.405 ************************************ 00:16:00.405 END TEST bdev_nbd 00:16:00.405 ************************************ 00:16:00.405 01:41:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
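Annotation (not part of the captured run): the bdev_nbd pass ends by exporting an SPDK logical volume through /dev/nbd0 and formatting it with mkfs.ext4 (nbd_with_lvol_verify). A minimal standalone sketch of the same RPC sequence, assuming a running SPDK target listening on /var/tmp/spdk-nbd.sock and the repo layout from this run, using the exact RPCs seen in the trace above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev with 512 B blocks
$rpc -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvol store on top of the malloc bdev
$rpc -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol (matches the 4096 x 1k mkfs blocks above)
$rpc -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as a kernel block device
mkfs.ext4 /dev/nbd0                                               # prove end-to-end reads/writes work
$rpc -s "$sock" nbd_stop_disk /dev/nbd0                           # detach before the target is shut down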
00:16:00.405 01:41:44 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:00.405 01:41:44 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:16:00.405 01:41:44 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:16:00.405 01:41:44 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:00.405 01:41:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:00.405 01:41:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.405 01:41:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.405 ************************************ 00:16:00.405 START TEST bdev_fio 00:16:00.405 ************************************ 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:00.405 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:00.405 ************************************ 00:16:00.405 START TEST bdev_fio_rw_verify 00:16:00.405 ************************************ 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:00.405 01:41:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:00.667 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:00.667 fio-3.35 00:16:00.667 Starting 6 threads 00:16:12.921 00:16:12.922 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72672: Thu Nov 21 01:41:55 2024 00:16:12.922 read: IOPS=14.3k, BW=55.7MiB/s (58.4MB/s)(557MiB/10003msec) 00:16:12.922 slat (usec): min=2, max=2387, avg= 6.48, stdev=15.91 00:16:12.922 clat (usec): min=91, max=7892, avg=1365.83, stdev=738.80 00:16:12.922 lat (usec): min=94, max=7899, avg=1372.31, stdev=739.48 
00:16:12.922 clat percentiles (usec): 00:16:12.922 | 50.000th=[ 1270], 99.000th=[ 3654], 99.900th=[ 4752], 99.990th=[ 5997], 00:16:12.922 | 99.999th=[ 6849] 00:16:12.922 write: IOPS=14.6k, BW=57.2MiB/s (59.9MB/s)(572MiB/10003msec); 0 zone resets 00:16:12.922 slat (usec): min=12, max=5511, avg=41.43, stdev=141.74 00:16:12.922 clat (usec): min=85, max=15494, avg=1623.19, stdev=823.39 00:16:12.922 lat (usec): min=102, max=15528, avg=1664.63, stdev=835.95 00:16:12.922 clat percentiles (usec): 00:16:12.922 | 50.000th=[ 1500], 99.000th=[ 4228], 99.900th=[ 5932], 99.990th=[10290], 00:16:12.922 | 99.999th=[15533] 00:16:12.922 bw ( KiB/s): min=48884, max=78534, per=100.00%, avg=58760.00, stdev=1416.75, samples=114 00:16:12.922 iops : min=12218, max=19633, avg=14688.95, stdev=354.25, samples=114 00:16:12.922 lat (usec) : 100=0.01%, 250=1.65%, 500=5.45%, 750=8.66%, 1000=11.84% 00:16:12.922 lat (msec) : 2=50.73%, 4=20.70%, 10=0.94%, 20=0.01% 00:16:12.922 cpu : usr=43.86%, sys=32.37%, ctx=5579, majf=0, minf=14609 00:16:12.922 IO depths : 1=11.4%, 2=23.8%, 4=51.1%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.922 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.922 issued rwts: total=142592,146354,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.922 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:12.922 00:16:12.922 Run status group 0 (all jobs): 00:16:12.922 READ: bw=55.7MiB/s (58.4MB/s), 55.7MiB/s-55.7MiB/s (58.4MB/s-58.4MB/s), io=557MiB (584MB), run=10003-10003msec 00:16:12.922 WRITE: bw=57.2MiB/s (59.9MB/s), 57.2MiB/s-57.2MiB/s (59.9MB/s-59.9MB/s), io=572MiB (599MB), run=10003-10003msec 00:16:12.922 ----------------------------------------------------- 00:16:12.922 Suppressions used: 00:16:12.922 count bytes template 00:16:12.922 6 48 /usr/src/fio/parse.c 00:16:12.922 3671 352416 /usr/src/fio/iolog.c 00:16:12.922 1 8 libtcmalloc_minimal.so 00:16:12.922 1 904 libcrypto.so 00:16:12.922 ----------------------------------------------------- 00:16:12.922 00:16:12.922 00:16:12.922 real 0m12.046s 00:16:12.922 user 0m27.837s 00:16:12.922 sys 0m19.810s 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:12.922 ************************************ 00:16:12.922 END TEST bdev_fio_rw_verify 00:16:12.922 ************************************ 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "f0b3ef26-bea3-4abf-9481-a24cf7d2e9c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f0b3ef26-bea3-4abf-9481-a24cf7d2e9c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "fe0c56af-9aff-4048-912c-72fc028b6a44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fe0c56af-9aff-4048-912c-72fc028b6a44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d842bbc0-774a-49f7-8805-1f36691ce186"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d842bbc0-774a-49f7-8805-1f36691ce186",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "38450bde-18c6-4176-aac1-52730b2eead6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "38450bde-18c6-4176-aac1-52730b2eead6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "ali 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:12.922 ases": [' ' "b1f55aa6-ee37-4b56-9486-3c68a4eaf5b5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b1f55aa6-ee37-4b56-9486-3c68a4eaf5b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "c4d0eb36-58cf-4cef-a891-4fbe59b2a922"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "c4d0eb36-58cf-4cef-a891-4fbe59b2a922",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:12.922 /home/vagrant/spdk_repo/spdk 00:16:12.922 ************************************ 00:16:12.922 END TEST bdev_fio 00:16:12.922 ************************************ 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:12.922 01:41:56 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:12.922 00:16:12.922 real 0m12.222s 00:16:12.922 user 0m27.911s 00:16:12.922 sys 0m19.891s 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.922 01:41:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:12.922 01:41:56 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:12.922 01:41:56 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:12.922 01:41:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:12.922 01:41:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.922 01:41:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.922 ************************************ 00:16:12.922 START TEST bdev_verify 00:16:12.922 ************************************ 00:16:12.922 01:41:56 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:12.923 [2024-11-21 01:41:56.539148] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:12.923 [2024-11-21 01:41:56.539492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72842 ] 00:16:12.923 [2024-11-21 01:41:56.702915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.923 [2024-11-21 01:41:56.823154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.923 [2024-11-21 01:41:56.823250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.496 Running I/O for 5 seconds... 
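Annotation (illustrative only): bdev_verify drives the six xnvme bdevs with bdevperf for a five-second verify workload; the progress counters that follow come from that run. A standalone invocation with the same parameters as logged above would look like the sketch below. It assumes the repo layout from this run and that bdev.json already describes the bdevs; the per-core job lines in the results suggest -C lets both cores in the 0x3 mask submit I/O to every bdev.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3    # 128-deep queue, 4 KiB I/O, 5 s verify pass on cores 0-1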
00:16:15.829 25088.00 IOPS, 98.00 MiB/s [2024-11-21T01:42:00.730Z] 24032.00 IOPS, 93.88 MiB/s [2024-11-21T01:42:01.674Z] 23669.33 IOPS, 92.46 MiB/s [2024-11-21T01:42:02.618Z] 23440.00 IOPS, 91.56 MiB/s [2024-11-21T01:42:02.618Z] 23257.60 IOPS, 90.85 MiB/s 00:16:18.661 Latency(us) 00:16:18.661 [2024-11-21T01:42:02.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.661 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0x80000 00:16:18.661 nvme0n1 : 5.04 1829.92 7.15 0.00 0.00 69827.50 7914.73 70577.23 00:16:18.661 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x80000 length 0x80000 00:16:18.661 nvme0n1 : 5.03 1856.26 7.25 0.00 0.00 68831.83 10687.41 75416.81 00:16:18.661 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0x80000 00:16:18.661 nvme0n2 : 5.03 1831.18 7.15 0.00 0.00 69634.43 12401.43 58881.58 00:16:18.661 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x80000 length 0x80000 00:16:18.661 nvme0n2 : 5.04 1852.70 7.24 0.00 0.00 68836.82 10737.82 70980.53 00:16:18.661 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0x80000 00:16:18.661 nvme0n3 : 5.04 1829.19 7.15 0.00 0.00 69603.13 9124.63 67754.14 00:16:18.661 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x80000 length 0x80000 00:16:18.661 nvme0n3 : 5.04 1855.59 7.25 0.00 0.00 68594.88 12552.66 66544.25 00:16:18.661 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0xbd0bd 00:16:18.661 nvme1n1 : 5.05 2256.86 8.82 0.00 0.00 56194.63 5343.70 67754.14 00:16:18.661 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:18.661 nvme1n1 : 5.06 2290.54 8.95 0.00 0.00 55390.88 4713.55 69367.34 00:16:18.661 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0xa0000 00:16:18.661 nvme2n1 : 5.06 1845.96 7.21 0.00 0.00 68763.92 2571.03 64527.75 00:16:18.661 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0xa0000 length 0xa0000 00:16:18.661 nvme2n1 : 5.06 1847.07 7.22 0.00 0.00 68544.55 11594.83 94371.84 00:16:18.661 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x0 length 0x20000 00:16:18.661 nvme3n1 : 5.07 1870.01 7.30 0.00 0.00 67761.09 3680.10 62914.56 00:16:18.661 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:18.661 Verification LBA range: start 0x20000 length 0x20000 00:16:18.661 nvme3n1 : 5.06 1870.35 7.31 0.00 0.00 67637.32 3730.51 74610.22 00:16:18.661 [2024-11-21T01:42:02.618Z] =================================================================================================================== 00:16:18.661 [2024-11-21T01:42:02.618Z] Total : 23035.62 89.98 0.00 0.00 66226.88 2571.03 94371.84 00:16:19.233 00:16:19.233 real 0m6.696s 00:16:19.233 user 0m10.829s 00:16:19.233 sys 0m1.456s 00:16:19.233 01:42:03 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:19.233 01:42:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:19.233 ************************************ 00:16:19.233 END TEST bdev_verify 00:16:19.233 ************************************ 00:16:19.493 01:42:03 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:19.493 01:42:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:19.493 01:42:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.493 01:42:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:19.493 ************************************ 00:16:19.493 START TEST bdev_verify_big_io 00:16:19.493 ************************************ 00:16:19.494 01:42:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:19.494 [2024-11-21 01:42:03.309440] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:19.494 [2024-11-21 01:42:03.309591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72941 ] 00:16:19.753 [2024-11-21 01:42:03.473409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.753 [2024-11-21 01:42:03.599119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.753 [2024-11-21 01:42:03.599217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.324 Running I/O for 5 seconds... 
00:16:25.541 1200.00 IOPS, 75.00 MiB/s [2024-11-21T01:42:10.442Z] 2581.50 IOPS, 161.34 MiB/s [2024-11-21T01:42:10.442Z] 3294.33 IOPS, 205.90 MiB/s 00:16:26.485 Latency(us) 00:16:26.485 [2024-11-21T01:42:10.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.485 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0x8000 00:16:26.485 nvme0n1 : 5.76 125.05 7.82 0.00 0.00 1003705.41 54041.99 858219.13 00:16:26.485 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x8000 length 0x8000 00:16:26.485 nvme0n1 : 5.74 133.69 8.36 0.00 0.00 906262.06 137121.48 1206669.00 00:16:26.485 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0x8000 00:16:26.485 nvme0n2 : 5.75 130.67 8.17 0.00 0.00 932533.35 5696.59 1084066.26 00:16:26.485 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x8000 length 0x8000 00:16:26.485 nvme0n2 : 5.80 143.52 8.97 0.00 0.00 822004.31 7662.67 974369.08 00:16:26.485 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0x8000 00:16:26.485 nvme0n3 : 5.74 128.11 8.01 0.00 0.00 930758.60 53235.40 1122782.92 00:16:26.485 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x8000 length 0x8000 00:16:26.485 nvme0n3 : 5.86 122.77 7.67 0.00 0.00 933232.08 64124.46 1806777.11 00:16:26.485 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0xbd0b 00:16:26.485 nvme1n1 : 5.75 150.32 9.39 0.00 0.00 772040.22 54445.29 1445421.69 00:16:26.485 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:26.485 nvme1n1 : 5.83 128.95 8.06 0.00 0.00 856033.30 30852.33 1593835.52 00:16:26.485 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0xa000 00:16:26.485 nvme2n1 : 5.75 111.31 6.96 0.00 0.00 1012571.53 5217.67 1264743.98 00:16:26.485 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0xa000 length 0xa000 00:16:26.485 nvme2n1 : 5.94 147.40 9.21 0.00 0.00 726330.54 1235.10 1935832.62 00:16:26.485 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x0 length 0x2000 00:16:26.485 nvme3n1 : 5.76 149.96 9.37 0.00 0.00 736038.89 3982.57 929199.66 00:16:26.485 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:26.485 Verification LBA range: start 0x2000 length 0x2000 00:16:26.485 nvme3n1 : 6.06 237.47 14.84 0.00 0.00 438151.00 2596.23 1096971.82 00:16:26.485 [2024-11-21T01:42:10.442Z] =================================================================================================================== 00:16:26.485 [2024-11-21T01:42:10.442Z] Total : 1709.22 106.83 0.00 0.00 805184.69 1235.10 1935832.62 00:16:27.451 00:16:27.451 real 0m8.030s 00:16:27.451 user 0m14.678s 00:16:27.451 sys 0m0.457s 00:16:27.451 01:42:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.451 01:42:11 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 ************************************ 00:16:27.451 END TEST bdev_verify_big_io 00:16:27.451 ************************************ 00:16:27.451 01:42:11 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:27.451 01:42:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:27.451 01:42:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.451 01:42:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.451 ************************************ 00:16:27.451 START TEST bdev_write_zeroes 00:16:27.451 ************************************ 00:16:27.451 01:42:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:27.711 [2024-11-21 01:42:11.415851] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:27.711 [2024-11-21 01:42:11.416181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73051 ] 00:16:27.711 [2024-11-21 01:42:11.580330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.995 [2024-11-21 01:42:11.678545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.256 Running I/O for 1 seconds... 
00:16:29.247 79808.00 IOPS, 311.75 MiB/s 00:16:29.247 Latency(us) 00:16:29.247 [2024-11-21T01:42:13.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.247 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme0n1 : 1.02 13085.39 51.11 0.00 0.00 9771.71 6553.60 20568.22 00:16:29.247 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme0n2 : 1.02 13069.69 51.05 0.00 0.00 9775.71 6654.42 19559.98 00:16:29.247 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme0n3 : 1.02 13054.88 51.00 0.00 0.00 9778.53 6654.42 19862.45 00:16:29.247 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme1n1 : 1.03 13778.03 53.82 0.00 0.00 9257.45 4486.70 17140.18 00:16:29.247 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme2n1 : 1.02 13037.55 50.93 0.00 0.00 9729.08 3906.95 20467.40 00:16:29.247 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:29.247 nvme3n1 : 1.02 13022.61 50.87 0.00 0.00 9731.85 3906.95 20568.22 00:16:29.247 [2024-11-21T01:42:13.204Z] =================================================================================================================== 00:16:29.247 [2024-11-21T01:42:13.204Z] Total : 79048.15 308.78 0.00 0.00 9669.76 3906.95 20568.22 00:16:30.189 00:16:30.189 real 0m2.594s 00:16:30.189 user 0m1.888s 00:16:30.189 sys 0m0.509s 00:16:30.189 01:42:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.189 ************************************ 00:16:30.189 END TEST bdev_write_zeroes 00:16:30.189 ************************************ 00:16:30.189 01:42:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:30.189 01:42:13 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.189 01:42:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:30.189 01:42:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.189 01:42:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.189 ************************************ 00:16:30.189 START TEST bdev_json_nonenclosed 00:16:30.189 ************************************ 00:16:30.189 01:42:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.189 [2024-11-21 01:42:14.081283] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:16:30.189 [2024-11-21 01:42:14.081606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:16:30.450 [2024-11-21 01:42:14.247843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.450 [2024-11-21 01:42:14.371659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.450 [2024-11-21 01:42:14.371754] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:30.450 [2024-11-21 01:42:14.371773] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:30.450 [2024-11-21 01:42:14.371784] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:30.711 00:16:30.711 real 0m0.558s 00:16:30.711 user 0m0.341s 00:16:30.711 sys 0m0.109s 00:16:30.711 01:42:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.711 01:42:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:30.711 ************************************ 00:16:30.711 END TEST bdev_json_nonenclosed 00:16:30.711 ************************************ 00:16:30.711 01:42:14 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.711 01:42:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:30.711 01:42:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.711 01:42:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.711 ************************************ 00:16:30.711 START TEST bdev_json_nonarray 00:16:30.711 ************************************ 00:16:30.711 01:42:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:30.973 [2024-11-21 01:42:14.701744] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:30.973 [2024-11-21 01:42:14.702088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73124 ] 00:16:30.973 [2024-11-21 01:42:14.865097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.234 [2024-11-21 01:42:14.983780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.234 [2024-11-21 01:42:14.983893] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
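Both JSON negative tests here exercise json_config_prepare_ctx: nonenclosed.json is rejected because the configuration is not enclosed in {}, and nonarray.json because its 'subsystems' member is not an array, so in each case the app stops with a non-zero status as expected. For contrast, a configuration with the minimal valid shape looks like the sketch below (illustrative only; a real config would carry subsystem entries such as the ones captured by save_config later in this log):

cat > /tmp/minimal.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# A top-level object holding a "subsystems" array is the shape the config loader expects.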
00:16:31.234 [2024-11-21 01:42:14.983913] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:31.234 [2024-11-21 01:42:14.983922] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:31.234 00:16:31.234 real 0m0.543s 00:16:31.234 user 0m0.321s 00:16:31.234 sys 0m0.116s 00:16:31.234 01:42:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.234 01:42:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:31.234 ************************************ 00:16:31.234 END TEST bdev_json_nonarray 00:16:31.234 ************************************ 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:31.495 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:31.496 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:31.496 01:42:15 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:32.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.361 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:37.361 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:37.361 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:37.361 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:37.361 ************************************ 00:16:37.361 END TEST blockdev_xnvme 00:16:37.361 ************************************ 00:16:37.361 00:16:37.361 real 0m55.915s 00:16:37.361 user 1m22.561s 00:16:37.361 sys 0m42.077s 00:16:37.361 01:42:21 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.361 01:42:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.361 01:42:21 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:37.361 01:42:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:37.361 01:42:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.361 01:42:21 -- common/autotest_common.sh@10 -- # set +x 00:16:37.361 ************************************ 00:16:37.361 START TEST ublk 00:16:37.361 ************************************ 00:16:37.361 01:42:21 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:37.361 * Looking for test storage... 
00:16:37.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:37.361 01:42:21 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:37.361 01:42:21 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:37.361 01:42:21 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.623 01:42:21 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.623 01:42:21 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.623 01:42:21 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.623 01:42:21 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.623 01:42:21 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.623 01:42:21 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:37.623 01:42:21 ublk -- scripts/common.sh@345 -- # : 1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.623 01:42:21 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.623 01:42:21 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@353 -- # local d=1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.623 01:42:21 ublk -- scripts/common.sh@355 -- # echo 1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.623 01:42:21 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@353 -- # local d=2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.623 01:42:21 ublk -- scripts/common.sh@355 -- # echo 2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.623 01:42:21 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.623 01:42:21 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.623 01:42:21 ublk -- scripts/common.sh@368 -- # return 0 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:37.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.623 --rc genhtml_branch_coverage=1 00:16:37.623 --rc genhtml_function_coverage=1 00:16:37.623 --rc genhtml_legend=1 00:16:37.623 --rc geninfo_all_blocks=1 00:16:37.623 --rc geninfo_unexecuted_blocks=1 00:16:37.623 00:16:37.623 ' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:37.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.623 --rc genhtml_branch_coverage=1 00:16:37.623 --rc genhtml_function_coverage=1 00:16:37.623 --rc genhtml_legend=1 00:16:37.623 --rc geninfo_all_blocks=1 00:16:37.623 --rc geninfo_unexecuted_blocks=1 00:16:37.623 00:16:37.623 ' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:37.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.623 --rc genhtml_branch_coverage=1 00:16:37.623 --rc 
genhtml_function_coverage=1 00:16:37.623 --rc genhtml_legend=1 00:16:37.623 --rc geninfo_all_blocks=1 00:16:37.623 --rc geninfo_unexecuted_blocks=1 00:16:37.623 00:16:37.623 ' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:37.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.623 --rc genhtml_branch_coverage=1 00:16:37.623 --rc genhtml_function_coverage=1 00:16:37.623 --rc genhtml_legend=1 00:16:37.623 --rc geninfo_all_blocks=1 00:16:37.623 --rc geninfo_unexecuted_blocks=1 00:16:37.623 00:16:37.623 ' 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:37.623 01:42:21 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:37.623 01:42:21 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:37.623 01:42:21 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:37.623 01:42:21 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:37.623 01:42:21 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:37.623 01:42:21 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:37.623 01:42:21 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:37.623 01:42:21 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:37.623 01:42:21 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.623 01:42:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.623 ************************************ 00:16:37.623 START TEST test_save_ublk_config 00:16:37.623 ************************************ 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:37.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
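test_save_ublk_config, which has just started above, checks that a ublk disk can be reconstructed from a saved configuration: a spdk_tgt is launched with ublk debug tracing, a malloc bdev is exposed as /dev/ublkb0, the running configuration is dumped with save_config, and that JSON is later fed to a fresh target. Reduced to plain RPC calls, the first half corresponds roughly to the sketch below (rpc.py is SPDK's RPC client; the option spelling for bdev_malloc_create is an assumption, while the ublk values match the saved config shown further down):

cd /home/vagrant/spdk_repo/spdk
./build/bin/spdk_tgt -L ublk &                            # target with ublk tracing enabled
./scripts/rpc.py ublk_create_target                       # create the user-space ublk target
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB bdev, 4 KiB blocks (flag assumed)
./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128    # /dev/ublkb0, 1 queue, depth 128
./scripts/rpc.py save_config > /tmp/ublk_config.json      # capture the running configuration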
00:16:37.623 01:42:21 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73414 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73414 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73414 ']' 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.623 01:42:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:37.623 [2024-11-21 01:42:21.497552] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:16:37.623 [2024-11-21 01:42:21.497716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73414 ] 00:16:37.885 [2024-11-21 01:42:21.659576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.885 [2024-11-21 01:42:21.809747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:38.829 [2024-11-21 01:42:22.625643] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:38.829 [2024-11-21 01:42:22.626627] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:38.829 malloc0 00:16:38.829 [2024-11-21 01:42:22.705785] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:38.829 [2024-11-21 01:42:22.705904] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:38.829 [2024-11-21 01:42:22.705917] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:38.829 [2024-11-21 01:42:22.705925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:38.829 [2024-11-21 01:42:22.714801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:38.829 [2024-11-21 01:42:22.714833] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:38.829 [2024-11-21 01:42:22.721658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:38.829 [2024-11-21 01:42:22.721801] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:16:38.829 [2024-11-21 01:42:22.738647] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:38.829 0 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.829 01:42:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:39.091 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.091 01:42:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:39.091 "subsystems": [ 00:16:39.091 { 00:16:39.091 "subsystem": "fsdev", 00:16:39.091 "config": [ 00:16:39.091 { 00:16:39.091 "method": "fsdev_set_opts", 00:16:39.091 "params": { 00:16:39.091 "fsdev_io_pool_size": 65535, 00:16:39.091 "fsdev_io_cache_size": 256 00:16:39.091 } 00:16:39.091 } 00:16:39.091 ] 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "subsystem": "keyring", 00:16:39.091 "config": [] 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "subsystem": "iobuf", 00:16:39.091 "config": [ 00:16:39.091 { 00:16:39.091 "method": "iobuf_set_options", 00:16:39.091 "params": { 00:16:39.091 "small_pool_count": 8192, 00:16:39.091 "large_pool_count": 1024, 00:16:39.091 "small_bufsize": 8192, 00:16:39.091 "large_bufsize": 135168, 00:16:39.091 "enable_numa": false 00:16:39.091 } 00:16:39.091 } 00:16:39.091 ] 00:16:39.091 }, 00:16:39.091 { 00:16:39.091 "subsystem": "sock", 00:16:39.091 "config": [ 00:16:39.091 { 00:16:39.091 "method": "sock_set_default_impl", 00:16:39.091 "params": { 00:16:39.091 "impl_name": "posix" 00:16:39.091 } 00:16:39.091 }, 00:16:39.091 { 00:16:39.092 "method": "sock_impl_set_options", 00:16:39.092 "params": { 00:16:39.092 "impl_name": "ssl", 00:16:39.092 "recv_buf_size": 4096, 00:16:39.092 "send_buf_size": 4096, 00:16:39.092 "enable_recv_pipe": true, 00:16:39.092 "enable_quickack": false, 00:16:39.092 "enable_placement_id": 0, 00:16:39.092 "enable_zerocopy_send_server": true, 00:16:39.092 "enable_zerocopy_send_client": false, 00:16:39.092 "zerocopy_threshold": 0, 00:16:39.092 "tls_version": 0, 00:16:39.092 "enable_ktls": false 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "sock_impl_set_options", 00:16:39.092 "params": { 00:16:39.092 "impl_name": "posix", 00:16:39.092 "recv_buf_size": 2097152, 00:16:39.092 "send_buf_size": 2097152, 00:16:39.092 "enable_recv_pipe": true, 00:16:39.092 "enable_quickack": false, 00:16:39.092 "enable_placement_id": 0, 00:16:39.092 "enable_zerocopy_send_server": true, 00:16:39.092 "enable_zerocopy_send_client": false, 00:16:39.092 "zerocopy_threshold": 0, 00:16:39.092 "tls_version": 0, 00:16:39.092 "enable_ktls": false 00:16:39.092 } 00:16:39.092 } 00:16:39.092 ] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "vmd", 00:16:39.092 "config": [] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "accel", 00:16:39.092 "config": [ 00:16:39.092 { 00:16:39.092 "method": "accel_set_options", 00:16:39.092 "params": { 00:16:39.092 "small_cache_size": 128, 00:16:39.092 "large_cache_size": 16, 00:16:39.092 "task_count": 2048, 00:16:39.092 "sequence_count": 2048, 00:16:39.092 "buf_count": 2048 00:16:39.092 } 00:16:39.092 } 00:16:39.092 ] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "bdev", 00:16:39.092 "config": [ 00:16:39.092 { 00:16:39.092 "method": "bdev_set_options", 00:16:39.092 
"params": { 00:16:39.092 "bdev_io_pool_size": 65535, 00:16:39.092 "bdev_io_cache_size": 256, 00:16:39.092 "bdev_auto_examine": true, 00:16:39.092 "iobuf_small_cache_size": 128, 00:16:39.092 "iobuf_large_cache_size": 16 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_raid_set_options", 00:16:39.092 "params": { 00:16:39.092 "process_window_size_kb": 1024, 00:16:39.092 "process_max_bandwidth_mb_sec": 0 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_iscsi_set_options", 00:16:39.092 "params": { 00:16:39.092 "timeout_sec": 30 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_nvme_set_options", 00:16:39.092 "params": { 00:16:39.092 "action_on_timeout": "none", 00:16:39.092 "timeout_us": 0, 00:16:39.092 "timeout_admin_us": 0, 00:16:39.092 "keep_alive_timeout_ms": 10000, 00:16:39.092 "arbitration_burst": 0, 00:16:39.092 "low_priority_weight": 0, 00:16:39.092 "medium_priority_weight": 0, 00:16:39.092 "high_priority_weight": 0, 00:16:39.092 "nvme_adminq_poll_period_us": 10000, 00:16:39.092 "nvme_ioq_poll_period_us": 0, 00:16:39.092 "io_queue_requests": 0, 00:16:39.092 "delay_cmd_submit": true, 00:16:39.092 "transport_retry_count": 4, 00:16:39.092 "bdev_retry_count": 3, 00:16:39.092 "transport_ack_timeout": 0, 00:16:39.092 "ctrlr_loss_timeout_sec": 0, 00:16:39.092 "reconnect_delay_sec": 0, 00:16:39.092 "fast_io_fail_timeout_sec": 0, 00:16:39.092 "disable_auto_failback": false, 00:16:39.092 "generate_uuids": false, 00:16:39.092 "transport_tos": 0, 00:16:39.092 "nvme_error_stat": false, 00:16:39.092 "rdma_srq_size": 0, 00:16:39.092 "io_path_stat": false, 00:16:39.092 "allow_accel_sequence": false, 00:16:39.092 "rdma_max_cq_size": 0, 00:16:39.092 "rdma_cm_event_timeout_ms": 0, 00:16:39.092 "dhchap_digests": [ 00:16:39.092 "sha256", 00:16:39.092 "sha384", 00:16:39.092 "sha512" 00:16:39.092 ], 00:16:39.092 "dhchap_dhgroups": [ 00:16:39.092 "null", 00:16:39.092 "ffdhe2048", 00:16:39.092 "ffdhe3072", 00:16:39.092 "ffdhe4096", 00:16:39.092 "ffdhe6144", 00:16:39.092 "ffdhe8192" 00:16:39.092 ] 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_nvme_set_hotplug", 00:16:39.092 "params": { 00:16:39.092 "period_us": 100000, 00:16:39.092 "enable": false 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_malloc_create", 00:16:39.092 "params": { 00:16:39.092 "name": "malloc0", 00:16:39.092 "num_blocks": 8192, 00:16:39.092 "block_size": 4096, 00:16:39.092 "physical_block_size": 4096, 00:16:39.092 "uuid": "f5418c1f-dfb6-483c-918f-043b0bc11c06", 00:16:39.092 "optimal_io_boundary": 0, 00:16:39.092 "md_size": 0, 00:16:39.092 "dif_type": 0, 00:16:39.092 "dif_is_head_of_md": false, 00:16:39.092 "dif_pi_format": 0 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "bdev_wait_for_examine" 00:16:39.092 } 00:16:39.092 ] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "scsi", 00:16:39.092 "config": null 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "scheduler", 00:16:39.092 "config": [ 00:16:39.092 { 00:16:39.092 "method": "framework_set_scheduler", 00:16:39.092 "params": { 00:16:39.092 "name": "static" 00:16:39.092 } 00:16:39.092 } 00:16:39.092 ] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "vhost_scsi", 00:16:39.092 "config": [] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "vhost_blk", 00:16:39.092 "config": [] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "ublk", 00:16:39.092 "config": [ 00:16:39.092 { 00:16:39.092 "method": 
"ublk_create_target", 00:16:39.092 "params": { 00:16:39.092 "cpumask": "1" 00:16:39.092 } 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "method": "ublk_start_disk", 00:16:39.092 "params": { 00:16:39.092 "bdev_name": "malloc0", 00:16:39.092 "ublk_id": 0, 00:16:39.092 "num_queues": 1, 00:16:39.092 "queue_depth": 128 00:16:39.092 } 00:16:39.092 } 00:16:39.092 ] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "nbd", 00:16:39.092 "config": [] 00:16:39.092 }, 00:16:39.092 { 00:16:39.092 "subsystem": "nvmf", 00:16:39.092 "config": [ 00:16:39.092 { 00:16:39.092 "method": "nvmf_set_config", 00:16:39.092 "params": { 00:16:39.092 "discovery_filter": "match_any", 00:16:39.092 "admin_cmd_passthru": { 00:16:39.092 "identify_ctrlr": false 00:16:39.092 }, 00:16:39.092 "dhchap_digests": [ 00:16:39.092 "sha256", 00:16:39.092 "sha384", 00:16:39.092 "sha512" 00:16:39.092 ], 00:16:39.092 "dhchap_dhgroups": [ 00:16:39.092 "null", 00:16:39.092 "ffdhe2048", 00:16:39.092 "ffdhe3072", 00:16:39.092 "ffdhe4096", 00:16:39.092 "ffdhe6144", 00:16:39.092 "ffdhe8192" 00:16:39.092 ] 00:16:39.092 } 00:16:39.093 }, 00:16:39.093 { 00:16:39.093 "method": "nvmf_set_max_subsystems", 00:16:39.093 "params": { 00:16:39.093 "max_subsystems": 1024 00:16:39.093 } 00:16:39.093 }, 00:16:39.093 { 00:16:39.093 "method": "nvmf_set_crdt", 00:16:39.093 "params": { 00:16:39.093 "crdt1": 0, 00:16:39.093 "crdt2": 0, 00:16:39.093 "crdt3": 0 00:16:39.093 } 00:16:39.093 } 00:16:39.093 ] 00:16:39.093 }, 00:16:39.093 { 00:16:39.093 "subsystem": "iscsi", 00:16:39.093 "config": [ 00:16:39.093 { 00:16:39.093 "method": "iscsi_set_options", 00:16:39.093 "params": { 00:16:39.093 "node_base": "iqn.2016-06.io.spdk", 00:16:39.093 "max_sessions": 128, 00:16:39.093 "max_connections_per_session": 2, 00:16:39.093 "max_queue_depth": 64, 00:16:39.093 "default_time2wait": 2, 00:16:39.093 "default_time2retain": 20, 00:16:39.093 "first_burst_length": 8192, 00:16:39.093 "immediate_data": true, 00:16:39.093 "allow_duplicated_isid": false, 00:16:39.093 "error_recovery_level": 0, 00:16:39.093 "nop_timeout": 60, 00:16:39.093 "nop_in_interval": 30, 00:16:39.093 "disable_chap": false, 00:16:39.093 "require_chap": false, 00:16:39.093 "mutual_chap": false, 00:16:39.093 "chap_group": 0, 00:16:39.093 "max_large_datain_per_connection": 64, 00:16:39.093 "max_r2t_per_connection": 4, 00:16:39.093 "pdu_pool_size": 36864, 00:16:39.093 "immediate_data_pool_size": 16384, 00:16:39.093 "data_out_pool_size": 2048 00:16:39.093 } 00:16:39.093 } 00:16:39.093 ] 00:16:39.093 } 00:16:39.093 ] 00:16:39.093 }' 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73414 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73414 ']' 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73414 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.093 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73414 00:16:39.354 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:39.354 killing process with pid 73414 00:16:39.354 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:39.354 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 73414' 00:16:39.354 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73414 00:16:39.354 01:42:23 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73414 00:16:40.299 [2024-11-21 01:42:24.234192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:40.558 [2024-11-21 01:42:24.274799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:40.558 [2024-11-21 01:42:24.274955] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:40.558 [2024-11-21 01:42:24.282666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:40.558 [2024-11-21 01:42:24.282728] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:40.558 [2024-11-21 01:42:24.282742] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:40.558 [2024-11-21 01:42:24.282773] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.558 [2024-11-21 01:42:24.282918] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73473 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 73473 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73473 ']' 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.934 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
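The restore phase that begins here starts a second spdk_tgt (pid 73473) directly from the JSON captured above; the harness pipes it in over /dev/fd/63, but an ordinary file works the same way. A hand-run equivalent, reusing the file name from the previous sketch, would be:

./build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json &   # -c loads a JSON configuration at startup
./scripts/rpc.py ublk_get_disks                           # should again report /dev/ublkb0 backed by malloc0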
00:16:41.934 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.935 01:42:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:41.935 01:42:25 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:41.935 01:42:25 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:41.935 "subsystems": [ 00:16:41.935 { 00:16:41.935 "subsystem": "fsdev", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "fsdev_set_opts", 00:16:41.935 "params": { 00:16:41.935 "fsdev_io_pool_size": 65535, 00:16:41.935 "fsdev_io_cache_size": 256 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "keyring", 00:16:41.935 "config": [] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "iobuf", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "iobuf_set_options", 00:16:41.935 "params": { 00:16:41.935 "small_pool_count": 8192, 00:16:41.935 "large_pool_count": 1024, 00:16:41.935 "small_bufsize": 8192, 00:16:41.935 "large_bufsize": 135168, 00:16:41.935 "enable_numa": false 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "sock", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "sock_set_default_impl", 00:16:41.935 "params": { 00:16:41.935 "impl_name": "posix" 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "sock_impl_set_options", 00:16:41.935 "params": { 00:16:41.935 "impl_name": "ssl", 00:16:41.935 "recv_buf_size": 4096, 00:16:41.935 "send_buf_size": 4096, 00:16:41.935 "enable_recv_pipe": true, 00:16:41.935 "enable_quickack": false, 00:16:41.935 "enable_placement_id": 0, 00:16:41.935 "enable_zerocopy_send_server": true, 00:16:41.935 "enable_zerocopy_send_client": false, 00:16:41.935 "zerocopy_threshold": 0, 00:16:41.935 "tls_version": 0, 00:16:41.935 "enable_ktls": false 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "sock_impl_set_options", 00:16:41.935 "params": { 00:16:41.935 "impl_name": "posix", 00:16:41.935 "recv_buf_size": 2097152, 00:16:41.935 "send_buf_size": 2097152, 00:16:41.935 "enable_recv_pipe": true, 00:16:41.935 "enable_quickack": false, 00:16:41.935 "enable_placement_id": 0, 00:16:41.935 "enable_zerocopy_send_server": true, 00:16:41.935 "enable_zerocopy_send_client": false, 00:16:41.935 "zerocopy_threshold": 0, 00:16:41.935 "tls_version": 0, 00:16:41.935 "enable_ktls": false 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "vmd", 00:16:41.935 "config": [] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "accel", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "accel_set_options", 00:16:41.935 "params": { 00:16:41.935 "small_cache_size": 128, 00:16:41.935 "large_cache_size": 16, 00:16:41.935 "task_count": 2048, 00:16:41.935 "sequence_count": 2048, 00:16:41.935 "buf_count": 2048 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "bdev", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "bdev_set_options", 00:16:41.935 "params": { 00:16:41.935 "bdev_io_pool_size": 65535, 00:16:41.935 "bdev_io_cache_size": 256, 00:16:41.935 "bdev_auto_examine": true, 00:16:41.935 "iobuf_small_cache_size": 128, 00:16:41.935 "iobuf_large_cache_size": 16 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_raid_set_options", 
00:16:41.935 "params": { 00:16:41.935 "process_window_size_kb": 1024, 00:16:41.935 "process_max_bandwidth_mb_sec": 0 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_iscsi_set_options", 00:16:41.935 "params": { 00:16:41.935 "timeout_sec": 30 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_nvme_set_options", 00:16:41.935 "params": { 00:16:41.935 "action_on_timeout": "none", 00:16:41.935 "timeout_us": 0, 00:16:41.935 "timeout_admin_us": 0, 00:16:41.935 "keep_alive_timeout_ms": 10000, 00:16:41.935 "arbitration_burst": 0, 00:16:41.935 "low_priority_weight": 0, 00:16:41.935 "medium_priority_weight": 0, 00:16:41.935 "high_priority_weight": 0, 00:16:41.935 "nvme_adminq_poll_period_us": 10000, 00:16:41.935 "nvme_ioq_poll_period_us": 0, 00:16:41.935 "io_queue_requests": 0, 00:16:41.935 "delay_cmd_submit": true, 00:16:41.935 "transport_retry_count": 4, 00:16:41.935 "bdev_retry_count": 3, 00:16:41.935 "transport_ack_timeout": 0, 00:16:41.935 "ctrlr_loss_timeout_sec": 0, 00:16:41.935 "reconnect_delay_sec": 0, 00:16:41.935 "fast_io_fail_timeout_sec": 0, 00:16:41.935 "disable_auto_failback": false, 00:16:41.935 "generate_uuids": false, 00:16:41.935 "transport_tos": 0, 00:16:41.935 "nvme_error_stat": false, 00:16:41.935 "rdma_srq_size": 0, 00:16:41.935 "io_path_stat": false, 00:16:41.935 "allow_accel_sequence": false, 00:16:41.935 "rdma_max_cq_size": 0, 00:16:41.935 "rdma_cm_event_timeout_ms": 0, 00:16:41.935 "dhchap_digests": [ 00:16:41.935 "sha256", 00:16:41.935 "sha384", 00:16:41.935 "sha512" 00:16:41.935 ], 00:16:41.935 "dhchap_dhgroups": [ 00:16:41.935 "null", 00:16:41.935 "ffdhe2048", 00:16:41.935 "ffdhe3072", 00:16:41.935 "ffdhe4096", 00:16:41.935 "ffdhe6144", 00:16:41.935 "ffdhe8192" 00:16:41.935 ] 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_nvme_set_hotplug", 00:16:41.935 "params": { 00:16:41.935 "period_us": 100000, 00:16:41.935 "enable": false 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_malloc_create", 00:16:41.935 "params": { 00:16:41.935 "name": "malloc0", 00:16:41.935 "num_blocks": 8192, 00:16:41.935 "block_size": 4096, 00:16:41.935 "physical_block_size": 4096, 00:16:41.935 "uuid": "f5418c1f-dfb6-483c-918f-043b0bc11c06", 00:16:41.935 "optimal_io_boundary": 0, 00:16:41.935 "md_size": 0, 00:16:41.935 "dif_type": 0, 00:16:41.935 "dif_is_head_of_md": false, 00:16:41.935 "dif_pi_format": 0 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "bdev_wait_for_examine" 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "scsi", 00:16:41.935 "config": null 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "scheduler", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "framework_set_scheduler", 00:16:41.935 "params": { 00:16:41.935 "name": "static" 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "vhost_scsi", 00:16:41.935 "config": [] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "vhost_blk", 00:16:41.935 "config": [] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "ublk", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "ublk_create_target", 00:16:41.935 "params": { 00:16:41.935 "cpumask": "1" 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "ublk_start_disk", 00:16:41.935 "params": { 00:16:41.935 "bdev_name": "malloc0", 00:16:41.935 "ublk_id": 0, 00:16:41.935 "num_queues": 1, 00:16:41.935 "queue_depth": 128 
00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "nbd", 00:16:41.935 "config": [] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "nvmf", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "nvmf_set_config", 00:16:41.935 "params": { 00:16:41.935 "discovery_filter": "match_any", 00:16:41.935 "admin_cmd_passthru": { 00:16:41.935 "identify_ctrlr": false 00:16:41.935 }, 00:16:41.935 "dhchap_digests": [ 00:16:41.935 "sha256", 00:16:41.935 "sha384", 00:16:41.935 "sha512" 00:16:41.935 ], 00:16:41.935 "dhchap_dhgroups": [ 00:16:41.935 "null", 00:16:41.935 "ffdhe2048", 00:16:41.935 "ffdhe3072", 00:16:41.935 "ffdhe4096", 00:16:41.935 "ffdhe6144", 00:16:41.935 "ffdhe8192" 00:16:41.935 ] 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "nvmf_set_max_subsystems", 00:16:41.935 "params": { 00:16:41.935 "max_subsystems": 1024 00:16:41.935 } 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "method": "nvmf_set_crdt", 00:16:41.935 "params": { 00:16:41.935 "crdt1": 0, 00:16:41.935 "crdt2": 0, 00:16:41.935 "crdt3": 0 00:16:41.935 } 00:16:41.935 } 00:16:41.935 ] 00:16:41.935 }, 00:16:41.935 { 00:16:41.935 "subsystem": "iscsi", 00:16:41.935 "config": [ 00:16:41.935 { 00:16:41.935 "method": "iscsi_set_options", 00:16:41.935 "params": { 00:16:41.936 "node_base": "iqn.2016-06.io.spdk", 00:16:41.936 "max_sessions": 128, 00:16:41.936 "max_connections_per_session": 2, 00:16:41.936 "max_queue_depth": 64, 00:16:41.936 "default_time2wait": 2, 00:16:41.936 "default_time2retain": 20, 00:16:41.936 "first_burst_length": 8192, 00:16:41.936 "immediate_data": true, 00:16:41.936 "allow_duplicated_isid": false, 00:16:41.936 "error_recovery_level": 0, 00:16:41.936 "nop_timeout": 60, 00:16:41.936 "nop_in_interval": 30, 00:16:41.936 "disable_chap": false, 00:16:41.936 "require_chap": false, 00:16:41.936 "mutual_chap": false, 00:16:41.936 "chap_group": 0, 00:16:41.936 "max_large_datain_per_connection": 64, 00:16:41.936 "max_r2t_per_connection": 4, 00:16:41.936 "pdu_pool_size": 36864, 00:16:41.936 "immediate_data_pool_size": 16384, 00:16:41.936 "data_out_pool_size": 2048 00:16:41.936 } 00:16:41.936 } 00:16:41.936 ] 00:16:41.936 } 00:16:41.936 ] 00:16:41.936 }' 00:16:41.936 [2024-11-21 01:42:25.584470] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:16:41.936 [2024-11-21 01:42:25.584588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73473 ] 00:16:41.936 [2024-11-21 01:42:25.745177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.197 [2024-11-21 01:42:25.887771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.140 [2024-11-21 01:42:26.850636] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.140 [2024-11-21 01:42:26.851658] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.140 [2024-11-21 01:42:26.858784] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:43.140 [2024-11-21 01:42:26.858892] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:43.140 [2024-11-21 01:42:26.858904] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:43.140 [2024-11-21 01:42:26.858913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.140 [2024-11-21 01:42:26.867767] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.140 [2024-11-21 01:42:26.867797] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.140 [2024-11-21 01:42:26.874666] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.140 [2024-11-21 01:42:26.874796] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:43.140 [2024-11-21 01:42:26.891636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73473 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73473 ']' 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73473 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73473 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.140 killing process with pid 73473 00:16:43.140 
01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73473' 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73473 00:16:43.140 01:42:26 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73473 00:16:44.523 [2024-11-21 01:42:28.223072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:44.523 [2024-11-21 01:42:28.258648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:44.523 [2024-11-21 01:42:28.258753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:44.523 [2024-11-21 01:42:28.266634] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:44.523 [2024-11-21 01:42:28.266678] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:44.523 [2024-11-21 01:42:28.266686] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:44.523 [2024-11-21 01:42:28.266708] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:44.523 [2024-11-21 01:42:28.266825] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:45.897 01:42:29 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:45.897 00:16:45.897 real 0m8.084s 00:16:45.897 user 0m5.549s 00:16:45.897 sys 0m3.199s 00:16:45.897 01:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:45.897 ************************************ 00:16:45.897 END TEST test_save_ublk_config 00:16:45.897 ************************************ 00:16:45.897 01:42:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.897 01:42:29 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73548 00:16:45.897 01:42:29 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:45.897 01:42:29 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73548 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@835 -- # '[' -z 73548 ']' 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.897 01:42:29 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.897 01:42:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:45.897 [2024-11-21 01:42:29.614936] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:16:45.897 [2024-11-21 01:42:29.615036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73548 ] 00:16:45.897 [2024-11-21 01:42:29.767776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.156 [2024-11-21 01:42:29.861331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.156 [2024-11-21 01:42:29.861400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.723 01:42:30 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.723 01:42:30 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:46.723 01:42:30 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:46.723 01:42:30 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.723 01:42:30 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.723 01:42:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.723 ************************************ 00:16:46.723 START TEST test_create_ublk 00:16:46.723 ************************************ 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:46.723 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.723 [2024-11-21 01:42:30.466633] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:46.723 [2024-11-21 01:42:30.468299] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.723 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:46.723 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.723 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:46.723 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.723 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.723 [2024-11-21 01:42:30.634745] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:46.723 [2024-11-21 01:42:30.635074] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:46.723 [2024-11-21 01:42:30.635089] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:46.723 [2024-11-21 01:42:30.635096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:46.723 [2024-11-21 01:42:30.643862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:46.723 [2024-11-21 01:42:30.643879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:46.723 
[2024-11-21 01:42:30.650637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:46.723 [2024-11-21 01:42:30.658677] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:46.723 [2024-11-21 01:42:30.673653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:46.982 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:46.982 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.982 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.982 01:42:30 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:46.982 { 00:16:46.982 "ublk_device": "/dev/ublkb0", 00:16:46.982 "id": 0, 00:16:46.982 "queue_depth": 512, 00:16:46.982 "num_queues": 4, 00:16:46.982 "bdev_name": "Malloc0" 00:16:46.982 } 00:16:46.982 ]' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:16:46.982 01:42:30 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:47.241 fio: verification read phase will never start because write phase uses all of runtime 00:16:47.241 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:47.241 fio-3.35 00:16:47.241 Starting 1 process 00:16:57.235 00:16:57.235 fio_test: (groupid=0, jobs=1): err= 0: pid=73593: Thu Nov 21 01:42:41 2024 00:16:57.235 write: IOPS=13.5k, BW=52.7MiB/s (55.3MB/s)(527MiB/10001msec); 0 zone resets 00:16:57.235 clat (usec): min=34, max=12300, avg=73.30, stdev=133.46 00:16:57.235 lat (usec): min=35, max=12315, avg=73.76, stdev=133.52 00:16:57.235 clat percentiles (usec): 00:16:57.235 | 1.00th=[ 50], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 59], 00:16:57.235 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 64], 60.00th=[ 65], 00:16:57.235 | 70.00th=[ 68], 80.00th=[ 71], 90.00th=[ 79], 95.00th=[ 114], 00:16:57.235 | 99.00th=[ 153], 99.50th=[ 245], 99.90th=[ 2868], 99.95th=[ 3556], 00:16:57.235 | 99.99th=[ 4080] 00:16:57.235 bw ( KiB/s): min=21472, max=67464, per=99.57%, avg=53759.16, stdev=12904.05, samples=19 00:16:57.235 iops : min= 5368, max=16866, avg=13439.79, stdev=3226.01, samples=19 00:16:57.235 lat (usec) : 50=1.20%, 100=92.17%, 250=6.16%, 500=0.26%, 750=0.01% 00:16:57.235 lat (usec) : 1000=0.01% 00:16:57.235 lat (msec) : 2=0.05%, 4=0.12%, 10=0.02%, 20=0.01% 00:16:57.235 cpu : usr=2.09%, sys=9.83%, ctx=134985, majf=0, minf=796 00:16:57.235 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.235 issued rwts: total=0,134985,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.235 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.235 00:16:57.235 Run status group 0 (all jobs): 00:16:57.235 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=527MiB (553MB), run=10001-10001msec 00:16:57.235 00:16:57.235 Disk stats (read/write): 00:16:57.235 ublkb0: ios=0/133460, merge=0/0, ticks=0/8660, in_queue=8660, util=99.09% 00:16:57.235 01:42:41 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 [2024-11-21 01:42:41.096811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:57.235 [2024-11-21 01:42:41.131668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:57.235 [2024-11-21 01:42:41.132342] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:57.235 [2024-11-21 01:42:41.135865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:57.235 [2024-11-21 01:42:41.136118] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:57.235 [2024-11-21 01:42:41.136128] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.235 01:42:41 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT 
rpc_cmd ublk_stop_disk 0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 [2024-11-21 01:42:41.154690] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:57.235 request: 00:16:57.235 { 00:16:57.235 "ublk_id": 0, 00:16:57.235 "method": "ublk_stop_disk", 00:16:57.235 "req_id": 1 00:16:57.235 } 00:16:57.235 Got JSON-RPC error response 00:16:57.235 response: 00:16:57.235 { 00:16:57.235 "code": -19, 00:16:57.235 "message": "No such device" 00:16:57.235 } 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:57.235 01:42:41 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 [2024-11-21 01:42:41.170695] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:57.235 [2024-11-21 01:42:41.178629] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:57.235 [2024-11-21 01:42:41.178660] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.235 01:42:41 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.235 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.802 01:42:41 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:57.802 01:42:41 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:57.802 01:42:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:57.802 00:16:57.802 real 0m11.173s 00:16:57.802 user 0m0.507s 00:16:57.802 sys 0m1.071s 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.802 01:42:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 ************************************ 00:16:57.802 END TEST test_create_ublk 00:16:57.802 ************************************ 00:16:57.802 01:42:41 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:57.802 01:42:41 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.802 01:42:41 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.802 01:42:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 ************************************ 00:16:57.802 START TEST test_create_multi_ublk 00:16:57.802 ************************************ 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 [2024-11-21 01:42:41.678627] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:57.802 [2024-11-21 01:42:41.680175] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.802 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.061 [2024-11-21 01:42:41.902739] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:58.061 [2024-11-21 01:42:41.903048] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:58.061 [2024-11-21 01:42:41.903055] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:58.061 [2024-11-21 01:42:41.903063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.061 [2024-11-21 01:42:41.914664] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.061 [2024-11-21 01:42:41.914683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.061 [2024-11-21 01:42:41.926640] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.061 [2024-11-21 01:42:41.927143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:58.061 [2024-11-21 01:42:41.961637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.061 01:42:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.319 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.319 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:58.319 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:58.319 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.319 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.319 [2024-11-21 01:42:42.180733] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:58.320 [2024-11-21 01:42:42.181032] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:58.320 [2024-11-21 01:42:42.181045] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:58.320 [2024-11-21 01:42:42.181050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.320 [2024-11-21 01:42:42.188650] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.320 [2024-11-21 01:42:42.188666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.320 [2024-11-21 01:42:42.196641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.320 [2024-11-21 01:42:42.197135] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:58.320 [2024-11-21 01:42:42.204705] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.320 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.320 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:58.320 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:58.320 
01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:58.320 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.320 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.578 [2024-11-21 01:42:42.356721] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:58.578 [2024-11-21 01:42:42.357023] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:58.578 [2024-11-21 01:42:42.357035] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:58.578 [2024-11-21 01:42:42.357042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.578 [2024-11-21 01:42:42.364646] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.578 [2024-11-21 01:42:42.364665] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.578 [2024-11-21 01:42:42.372636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.578 [2024-11-21 01:42:42.373139] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:58.578 [2024-11-21 01:42:42.381667] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.578 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.837 [2024-11-21 01:42:42.540740] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:58.837 [2024-11-21 01:42:42.541033] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:58.837 [2024-11-21 01:42:42.541046] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:58.837 [2024-11-21 01:42:42.541051] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:58.837 
[2024-11-21 01:42:42.548654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:58.837 [2024-11-21 01:42:42.548671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:58.837 [2024-11-21 01:42:42.556639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:58.837 [2024-11-21 01:42:42.557137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:58.837 [2024-11-21 01:42:42.560487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:58.837 { 00:16:58.837 "ublk_device": "/dev/ublkb0", 00:16:58.837 "id": 0, 00:16:58.837 "queue_depth": 512, 00:16:58.837 "num_queues": 4, 00:16:58.837 "bdev_name": "Malloc0" 00:16:58.837 }, 00:16:58.837 { 00:16:58.837 "ublk_device": "/dev/ublkb1", 00:16:58.837 "id": 1, 00:16:58.837 "queue_depth": 512, 00:16:58.837 "num_queues": 4, 00:16:58.837 "bdev_name": "Malloc1" 00:16:58.837 }, 00:16:58.837 { 00:16:58.837 "ublk_device": "/dev/ublkb2", 00:16:58.837 "id": 2, 00:16:58.837 "queue_depth": 512, 00:16:58.837 "num_queues": 4, 00:16:58.837 "bdev_name": "Malloc2" 00:16:58.837 }, 00:16:58.837 { 00:16:58.837 "ublk_device": "/dev/ublkb3", 00:16:58.837 "id": 3, 00:16:58.837 "queue_depth": 512, 00:16:58.837 "num_queues": 4, 00:16:58.837 "bdev_name": "Malloc3" 00:16:58.837 } 00:16:58.837 ]' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:58.837 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.096 01:42:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:59.096 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.096 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:59.096 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:59.096 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.096 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.355 [2024-11-21 01:42:43.200704] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.355 [2024-11-21 01:42:43.244684] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.355 [2024-11-21 01:42:43.245554] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.355 [2024-11-21 01:42:43.252653] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.355 [2024-11-21 01:42:43.252906] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:59.355 [2024-11-21 01:42:43.252920] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.355 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.355 [2024-11-21 01:42:43.268690] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.355 [2024-11-21 01:42:43.308131] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.355 [2024-11-21 01:42:43.309215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.614 [2024-11-21 01:42:43.315641] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.614 [2024-11-21 01:42:43.315891] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:59.614 [2024-11-21 01:42:43.315904] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.614 [2024-11-21 01:42:43.331717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.614 [2024-11-21 01:42:43.372067] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.614 [2024-11-21 01:42:43.373213] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.614 [2024-11-21 01:42:43.379643] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.614 [2024-11-21 01:42:43.379879] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:59.614 [2024-11-21 01:42:43.379891] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.614 [2024-11-21 01:42:43.393716] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.614 [2024-11-21 01:42:43.435659] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.614 [2024-11-21 01:42:43.436323] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.614 [2024-11-21 01:42:43.449633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.614 [2024-11-21 01:42:43.449882] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:59.614 [2024-11-21 01:42:43.449901] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.614 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:59.872 [2024-11-21 01:42:43.636677] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:59.872 [2024-11-21 01:42:43.644627] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.872 [2024-11-21 01:42:43.644654] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:59.872 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:59.872 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.872 01:42:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.872 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.872 01:42:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.129 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.129 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.129 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:00.129 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.129 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.695 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:00.952 01:42:44 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:00.952 00:17:00.952 real 0m3.180s 00:17:00.952 user 0m0.807s 00:17:00.952 sys 0m0.136s 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.952 01:42:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.952 ************************************ 00:17:00.952 END TEST test_create_multi_ublk 00:17:00.952 ************************************ 00:17:00.952 01:42:44 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:00.952 01:42:44 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:00.952 01:42:44 ublk -- ublk/ublk.sh@130 -- # killprocess 73548 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@954 -- # '[' -z 73548 ']' 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@958 -- # kill -0 73548 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@959 -- # uname 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73548 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73548' 00:17:00.952 killing process with pid 73548 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@973 -- # kill 73548 00:17:00.952 01:42:44 ublk -- common/autotest_common.sh@978 -- # wait 73548 00:17:01.518 [2024-11-21 01:42:45.433754] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:01.518 [2024-11-21 01:42:45.433798] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:02.455 00:17:02.455 real 0m24.839s 00:17:02.455 user 0m35.370s 00:17:02.455 sys 0m8.869s 00:17:02.455 01:42:46 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.455 ************************************ 00:17:02.455 END TEST ublk 00:17:02.455 ************************************ 00:17:02.455 01:42:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:02.455 01:42:46 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:02.455 
01:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:02.455 01:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.455 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:17:02.455 ************************************ 00:17:02.455 START TEST ublk_recovery 00:17:02.455 ************************************ 00:17:02.455 01:42:46 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:02.455 * Looking for test storage... 00:17:02.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.456 01:42:46 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.456 --rc genhtml_branch_coverage=1 00:17:02.456 --rc genhtml_function_coverage=1 00:17:02.456 --rc genhtml_legend=1 00:17:02.456 --rc geninfo_all_blocks=1 00:17:02.456 --rc geninfo_unexecuted_blocks=1 00:17:02.456 00:17:02.456 ' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.456 --rc genhtml_branch_coverage=1 00:17:02.456 --rc genhtml_function_coverage=1 00:17:02.456 --rc genhtml_legend=1 00:17:02.456 --rc geninfo_all_blocks=1 00:17:02.456 --rc geninfo_unexecuted_blocks=1 00:17:02.456 00:17:02.456 ' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.456 --rc genhtml_branch_coverage=1 00:17:02.456 --rc genhtml_function_coverage=1 00:17:02.456 --rc genhtml_legend=1 00:17:02.456 --rc geninfo_all_blocks=1 00:17:02.456 --rc geninfo_unexecuted_blocks=1 00:17:02.456 00:17:02.456 ' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:02.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.456 --rc genhtml_branch_coverage=1 00:17:02.456 --rc genhtml_function_coverage=1 00:17:02.456 --rc genhtml_legend=1 00:17:02.456 --rc geninfo_all_blocks=1 00:17:02.456 --rc geninfo_unexecuted_blocks=1 00:17:02.456 00:17:02.456 ' 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:02.456 01:42:46 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73943 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73943 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73943 ']' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.456 01:42:46 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.456 01:42:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.456 [2024-11-21 01:42:46.347848] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:17:02.456 [2024-11-21 01:42:46.347992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73943 ] 00:17:02.717 [2024-11-21 01:42:46.507594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.717 [2024-11-21 01:42:46.606348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.717 [2024-11-21 01:42:46.606451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:03.742 01:42:47 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.742 [2024-11-21 01:42:47.292653] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:03.742 [2024-11-21 01:42:47.294999] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.742 01:42:47 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.742 malloc0 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.742 01:42:47 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:03.742 [2024-11-21 01:42:47.405019] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:03.742 [2024-11-21 01:42:47.405117] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:03.742 [2024-11-21 01:42:47.405129] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:03.742 [2024-11-21 01:42:47.405138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:03.742 [2024-11-21 01:42:47.413732] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:03.742 [2024-11-21 01:42:47.413753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:03.742 [2024-11-21 01:42:47.420642] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:03.742 [2024-11-21 01:42:47.420782] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:03.742 [2024-11-21 01:42:47.428712] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:03.742 1 00:17:03.742 01:42:47 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.742 01:42:47 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:04.677 01:42:48 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73979 00:17:04.677 01:42:48 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:04.677 01:42:48 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:04.677 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:04.677 fio-3.35 00:17:04.677 Starting 1 process 00:17:09.944 01:42:53 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73943 00:17:09.944 01:42:53 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:15.234 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73943 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:15.234 01:42:58 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74091 00:17:15.234 01:42:58 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:15.234 01:42:58 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:15.234 01:42:58 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74091 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74091 ']' 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.234 01:42:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.234 [2024-11-21 01:42:58.532544] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:17:15.234 [2024-11-21 01:42:58.532684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74091 ] 00:17:15.234 [2024-11-21 01:42:58.692395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:15.234 [2024-11-21 01:42:58.792324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.234 [2024-11-21 01:42:58.792400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:15.493 01:42:59 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.493 [2024-11-21 01:42:59.375638] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:15.493 [2024-11-21 01:42:59.377530] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.493 01:42:59 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.493 01:42:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 malloc0 00:17:15.751 01:42:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.751 01:42:59 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:15.751 01:42:59 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.751 01:42:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 [2024-11-21 01:42:59.479780] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:15.751 [2024-11-21 01:42:59.479817] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:15.751 [2024-11-21 01:42:59.479827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:15.751 [2024-11-21 01:42:59.487663] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:15.751 [2024-11-21 01:42:59.487689] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:15.751 [2024-11-21 01:42:59.487697] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:15.751 [2024-11-21 01:42:59.487773] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:15.751 1 00:17:15.751 01:42:59 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.751 01:42:59 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73979 00:17:15.751 [2024-11-21 01:42:59.495639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:15.751 [2024-11-21 01:42:59.501043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:15.751 [2024-11-21 01:42:59.508839] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:15.751 [2024-11-21 
01:42:59.508860] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:11.977 00:18:11.977 fio_test: (groupid=0, jobs=1): err= 0: pid=73982: Thu Nov 21 01:43:48 2024 00:18:11.977 read: IOPS=25.9k, BW=101MiB/s (106MB/s)(6075MiB/60005msec) 00:18:11.977 slat (nsec): min=965, max=324809, avg=5088.19, stdev=1684.75 00:18:11.977 clat (usec): min=648, max=6079.0k, avg=2418.67, stdev=38361.56 00:18:11.977 lat (usec): min=682, max=6079.0k, avg=2423.75, stdev=38361.57 00:18:11.977 clat percentiles (usec): 00:18:11.977 | 1.00th=[ 1778], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942], 00:18:11.977 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2040], 00:18:11.977 | 70.00th=[ 2114], 80.00th=[ 2212], 90.00th=[ 2507], 95.00th=[ 3032], 00:18:11.977 | 99.00th=[ 4686], 99.50th=[ 5211], 99.90th=[ 7111], 99.95th=[ 7963], 00:18:11.977 | 99.99th=[12649] 00:18:11.977 bw ( KiB/s): min=12280, max=124352, per=100.00%, avg=114176.35, stdev=16055.55, samples=108 00:18:11.977 iops : min= 3070, max=31088, avg=28544.07, stdev=4013.88, samples=108 00:18:11.977 write: IOPS=25.9k, BW=101MiB/s (106MB/s)(6068MiB/60005msec); 0 zone resets 00:18:11.977 slat (nsec): min=984, max=238003, avg=5171.23, stdev=1660.18 00:18:11.977 clat (usec): min=601, max=6078.8k, avg=2511.33, stdev=39602.40 00:18:11.977 lat (usec): min=605, max=6078.8k, avg=2516.50, stdev=39602.41 00:18:11.977 clat percentiles (usec): 00:18:11.978 | 1.00th=[ 1795], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2040], 00:18:11.978 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2114], 60.00th=[ 2114], 00:18:11.978 | 70.00th=[ 2212], 80.00th=[ 2311], 90.00th=[ 2573], 95.00th=[ 2966], 00:18:11.978 | 99.00th=[ 4686], 99.50th=[ 5211], 99.90th=[ 7177], 99.95th=[ 8094], 00:18:11.978 | 99.99th=[13435] 00:18:11.978 bw ( KiB/s): min=11864, max=123576, per=100.00%, avg=114041.86, stdev=16023.36, samples=108 00:18:11.978 iops : min= 2966, max=30894, avg=28510.45, stdev=4005.84, samples=108 00:18:11.978 lat (usec) : 750=0.01%, 1000=0.01% 00:18:11.978 lat (msec) : 2=27.77%, 4=69.90%, 10=2.30%, 20=0.01%, >=2000=0.01% 00:18:11.978 cpu : usr=6.07%, sys=27.34%, ctx=101419, majf=0, minf=13 00:18:11.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:11.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:11.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:11.978 issued rwts: total=1555294,1553474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:11.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:11.978 00:18:11.978 Run status group 0 (all jobs): 00:18:11.978 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=6075MiB (6370MB), run=60005-60005msec 00:18:11.978 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=6068MiB (6363MB), run=60005-60005msec 00:18:11.978 00:18:11.978 Disk stats (read/write): 00:18:11.978 ublkb1: ios=1552038/1550298, merge=0/0, ticks=3653635/3666041, in_queue=7319676, util=99.90% 00:18:11.978 01:43:48 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 [2024-11-21 01:43:48.692338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:11.978 [2024-11-21 01:43:48.737706] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:18:11.978 [2024-11-21 01:43:48.737868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:11.978 [2024-11-21 01:43:48.742651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:11.978 [2024-11-21 01:43:48.742748] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:11.978 [2024-11-21 01:43:48.742757] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.978 01:43:48 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 [2024-11-21 01:43:48.758720] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:11.978 [2024-11-21 01:43:48.766631] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:11.978 [2024-11-21 01:43:48.766664] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.978 01:43:48 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:11.978 01:43:48 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:11.978 01:43:48 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74091 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74091 ']' 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74091 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74091 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.978 killing process with pid 74091 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74091' 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74091 00:18:11.978 01:43:48 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74091 00:18:11.978 [2024-11-21 01:43:49.856049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:11.978 [2024-11-21 01:43:49.856096] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:11.978 00:18:11.978 real 1m4.493s 00:18:11.978 user 1m43.114s 00:18:11.978 sys 0m35.046s 00:18:11.978 01:43:50 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.978 ************************************ 00:18:11.978 END TEST ublk_recovery 00:18:11.978 ************************************ 00:18:11.978 01:43:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 01:43:50 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:11.978 01:43:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:11.978 01:43:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:11.978 01:43:50 -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 01:43:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:11.978 01:43:50 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:11.978 01:43:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:11.978 01:43:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.978 01:43:50 -- common/autotest_common.sh@10 -- # set +x 00:18:11.978 ************************************ 00:18:11.978 START TEST ftl 00:18:11.978 ************************************ 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:11.978 * Looking for test storage... 00:18:11.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.978 01:43:50 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.978 01:43:50 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.978 01:43:50 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.978 01:43:50 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.978 01:43:50 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.978 01:43:50 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:11.978 01:43:50 ftl -- scripts/common.sh@345 -- # : 1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.978 01:43:50 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.978 01:43:50 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@353 -- # local d=1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.978 01:43:50 ftl -- scripts/common.sh@355 -- # echo 1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.978 01:43:50 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@353 -- # local d=2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.978 01:43:50 ftl -- scripts/common.sh@355 -- # echo 2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.978 01:43:50 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.978 01:43:50 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.978 01:43:50 ftl -- scripts/common.sh@368 -- # return 0 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.978 --rc genhtml_branch_coverage=1 00:18:11.978 --rc genhtml_function_coverage=1 00:18:11.978 --rc genhtml_legend=1 00:18:11.978 --rc geninfo_all_blocks=1 00:18:11.978 --rc geninfo_unexecuted_blocks=1 00:18:11.978 00:18:11.978 ' 00:18:11.978 01:43:50 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.979 --rc genhtml_branch_coverage=1 00:18:11.979 --rc genhtml_function_coverage=1 00:18:11.979 --rc genhtml_legend=1 00:18:11.979 --rc geninfo_all_blocks=1 00:18:11.979 --rc geninfo_unexecuted_blocks=1 00:18:11.979 00:18:11.979 ' 00:18:11.979 01:43:50 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.979 --rc genhtml_branch_coverage=1 00:18:11.979 --rc genhtml_function_coverage=1 00:18:11.979 --rc genhtml_legend=1 00:18:11.979 --rc geninfo_all_blocks=1 00:18:11.979 --rc geninfo_unexecuted_blocks=1 00:18:11.979 00:18:11.979 ' 00:18:11.979 01:43:50 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.979 --rc genhtml_branch_coverage=1 00:18:11.979 --rc genhtml_function_coverage=1 00:18:11.979 --rc genhtml_legend=1 00:18:11.979 --rc geninfo_all_blocks=1 00:18:11.979 --rc geninfo_unexecuted_blocks=1 00:18:11.979 00:18:11.979 ' 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:11.979 01:43:50 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:11.979 01:43:50 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.979 01:43:50 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.979 01:43:50 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:11.979 01:43:50 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:11.979 01:43:50 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.979 01:43:50 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.979 01:43:50 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.979 01:43:50 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.979 01:43:50 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.979 01:43:50 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:11.979 01:43:50 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:11.979 01:43:50 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.979 01:43:50 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.979 01:43:50 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:11.979 01:43:50 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.979 01:43:50 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.979 01:43:50 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.979 01:43:50 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.979 01:43:50 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:11.979 01:43:50 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:11.979 01:43:50 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.979 01:43:50 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:11.979 01:43:50 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:11.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:11.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.979 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.979 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:11.979 01:43:51 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74890 00:18:11.979 01:43:51 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74890 00:18:11.979 01:43:51 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:11.979 01:43:51 ftl -- common/autotest_common.sh@835 -- # '[' -z 74890 ']' 00:18:11.979 01:43:51 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.979 01:43:51 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.979 01:43:51 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.979 01:43:51 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.979 01:43:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:11.979 [2024-11-21 01:43:51.365977] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:18:11.979 [2024-11-21 01:43:51.366140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74890 ] 00:18:11.979 [2024-11-21 01:43:51.531463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.979 [2024-11-21 01:43:51.648484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.979 01:43:52 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.979 01:43:52 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:11.979 01:43:52 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:11.979 01:43:52 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@50 -- # break 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:11.979 01:43:53 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:11.979 01:43:54 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:11.979 01:43:54 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:11.979 01:43:54 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:11.979 01:43:54 ftl -- ftl/ftl.sh@63 -- # break 00:18:11.979 01:43:54 ftl -- ftl/ftl.sh@66 -- # killprocess 74890 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@954 -- # '[' -z 74890 ']' 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@958 -- # kill -0 74890 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@959 -- # uname 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.979 01:43:54 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74890 00:18:11.979 killing process with pid 74890 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74890' 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@973 -- # kill 74890 00:18:11.979 01:43:54 ftl -- common/autotest_common.sh@978 -- # wait 74890 00:18:11.979 01:43:55 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:11.979 01:43:55 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:11.979 01:43:55 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:11.979 01:43:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.980 01:43:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:11.980 ************************************ 00:18:11.980 START TEST ftl_fio_basic 00:18:11.980 ************************************ 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:11.980 * Looking for test storage... 00:18:11.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.980 --rc genhtml_branch_coverage=1 00:18:11.980 --rc genhtml_function_coverage=1 00:18:11.980 --rc genhtml_legend=1 00:18:11.980 --rc geninfo_all_blocks=1 00:18:11.980 --rc geninfo_unexecuted_blocks=1 00:18:11.980 00:18:11.980 ' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.980 --rc genhtml_branch_coverage=1 00:18:11.980 --rc genhtml_function_coverage=1 00:18:11.980 --rc genhtml_legend=1 00:18:11.980 --rc geninfo_all_blocks=1 00:18:11.980 --rc geninfo_unexecuted_blocks=1 00:18:11.980 00:18:11.980 ' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.980 --rc genhtml_branch_coverage=1 00:18:11.980 --rc genhtml_function_coverage=1 00:18:11.980 --rc genhtml_legend=1 00:18:11.980 --rc geninfo_all_blocks=1 00:18:11.980 --rc geninfo_unexecuted_blocks=1 00:18:11.980 00:18:11.980 ' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.980 --rc genhtml_branch_coverage=1 00:18:11.980 --rc genhtml_function_coverage=1 00:18:11.980 --rc genhtml_legend=1 00:18:11.980 --rc geninfo_all_blocks=1 00:18:11.980 --rc geninfo_unexecuted_blocks=1 00:18:11.980 00:18:11.980 ' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:11.980 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75029 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75029 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75029 ']' 00:18:11.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:11.981 01:43:55 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:11.981 [2024-11-21 01:43:55.703524] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
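The trace that follows assembles the FTL device stack used by ftl_fio_basic. As a minimal sketch, assuming an spdk_tgt is already up and listening on /var/tmp/spdk.sock, the same stack could be replayed by hand with the rpc.py calls recorded in this log; the bdev names, PCI addresses, lvstore/lvol UUIDs and sizes shown here are the values from this particular run, not general defaults.

    # minimal sketch of the bring-up the following trace performs (values from this run)
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe -> nvme0n1
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore "lvs" on the base bdev
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fe455cf7-e841-4926-9673-87718b027e13   # thin 103424 MiB lvol in that lvstore (returned as bdev 6e0f1fcf-... in this run)
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache NVMe -> nvc0n1
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1                              # 5171 MiB split -> nvc0n1p0
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 -c nvc0n1p0 --l2p_dram_limit 60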
00:18:11.981 [2024-11-21 01:43:55.703633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75029 ] 00:18:11.981 [2024-11-21 01:43:55.856016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:12.239 [2024-11-21 01:43:55.948988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.239 [2024-11-21 01:43:55.949272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.239 [2024-11-21 01:43:55.949280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:12.805 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:13.063 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:13.063 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:13.063 01:43:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:13.064 { 00:18:13.064 "name": "nvme0n1", 00:18:13.064 "aliases": [ 00:18:13.064 "bf9c92cc-d7eb-4814-8005-8d032fd0235a" 00:18:13.064 ], 00:18:13.064 "product_name": "NVMe disk", 00:18:13.064 "block_size": 4096, 00:18:13.064 "num_blocks": 1310720, 00:18:13.064 "uuid": "bf9c92cc-d7eb-4814-8005-8d032fd0235a", 00:18:13.064 "numa_id": -1, 00:18:13.064 "assigned_rate_limits": { 00:18:13.064 "rw_ios_per_sec": 0, 00:18:13.064 "rw_mbytes_per_sec": 0, 00:18:13.064 "r_mbytes_per_sec": 0, 00:18:13.064 "w_mbytes_per_sec": 0 00:18:13.064 }, 00:18:13.064 "claimed": false, 00:18:13.064 "zoned": false, 00:18:13.064 "supported_io_types": { 00:18:13.064 "read": true, 00:18:13.064 "write": true, 00:18:13.064 "unmap": true, 00:18:13.064 "flush": true, 00:18:13.064 "reset": true, 00:18:13.064 "nvme_admin": true, 00:18:13.064 "nvme_io": true, 00:18:13.064 "nvme_io_md": false, 00:18:13.064 "write_zeroes": true, 00:18:13.064 "zcopy": false, 00:18:13.064 "get_zone_info": false, 00:18:13.064 "zone_management": false, 00:18:13.064 "zone_append": false, 00:18:13.064 "compare": true, 00:18:13.064 "compare_and_write": false, 00:18:13.064 "abort": true, 00:18:13.064 
"seek_hole": false, 00:18:13.064 "seek_data": false, 00:18:13.064 "copy": true, 00:18:13.064 "nvme_iov_md": false 00:18:13.064 }, 00:18:13.064 "driver_specific": { 00:18:13.064 "nvme": [ 00:18:13.064 { 00:18:13.064 "pci_address": "0000:00:11.0", 00:18:13.064 "trid": { 00:18:13.064 "trtype": "PCIe", 00:18:13.064 "traddr": "0000:00:11.0" 00:18:13.064 }, 00:18:13.064 "ctrlr_data": { 00:18:13.064 "cntlid": 0, 00:18:13.064 "vendor_id": "0x1b36", 00:18:13.064 "model_number": "QEMU NVMe Ctrl", 00:18:13.064 "serial_number": "12341", 00:18:13.064 "firmware_revision": "8.0.0", 00:18:13.064 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:13.064 "oacs": { 00:18:13.064 "security": 0, 00:18:13.064 "format": 1, 00:18:13.064 "firmware": 0, 00:18:13.064 "ns_manage": 1 00:18:13.064 }, 00:18:13.064 "multi_ctrlr": false, 00:18:13.064 "ana_reporting": false 00:18:13.064 }, 00:18:13.064 "vs": { 00:18:13.064 "nvme_version": "1.4" 00:18:13.064 }, 00:18:13.064 "ns_data": { 00:18:13.064 "id": 1, 00:18:13.064 "can_share": false 00:18:13.064 } 00:18:13.064 } 00:18:13.064 ], 00:18:13.064 "mp_policy": "active_passive" 00:18:13.064 } 00:18:13.064 } 00:18:13.064 ]' 00:18:13.064 01:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:13.323 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:13.581 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=fe455cf7-e841-4926-9673-87718b027e13 00:18:13.581 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fe455cf7-e841-4926-9673-87718b027e13 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 
00:18:13.838 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:13.838 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.096 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:14.096 { 00:18:14.096 "name": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.096 "aliases": [ 00:18:14.096 "lvs/nvme0n1p0" 00:18:14.096 ], 00:18:14.097 "product_name": "Logical Volume", 00:18:14.097 "block_size": 4096, 00:18:14.097 "num_blocks": 26476544, 00:18:14.097 "uuid": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.097 "assigned_rate_limits": { 00:18:14.097 "rw_ios_per_sec": 0, 00:18:14.097 "rw_mbytes_per_sec": 0, 00:18:14.097 "r_mbytes_per_sec": 0, 00:18:14.097 "w_mbytes_per_sec": 0 00:18:14.097 }, 00:18:14.097 "claimed": false, 00:18:14.097 "zoned": false, 00:18:14.097 "supported_io_types": { 00:18:14.097 "read": true, 00:18:14.097 "write": true, 00:18:14.097 "unmap": true, 00:18:14.097 "flush": false, 00:18:14.097 "reset": true, 00:18:14.097 "nvme_admin": false, 00:18:14.097 "nvme_io": false, 00:18:14.097 "nvme_io_md": false, 00:18:14.097 "write_zeroes": true, 00:18:14.097 "zcopy": false, 00:18:14.097 "get_zone_info": false, 00:18:14.097 "zone_management": false, 00:18:14.097 "zone_append": false, 00:18:14.097 "compare": false, 00:18:14.097 "compare_and_write": false, 00:18:14.097 "abort": false, 00:18:14.097 "seek_hole": true, 00:18:14.097 "seek_data": true, 00:18:14.097 "copy": false, 00:18:14.097 "nvme_iov_md": false 00:18:14.097 }, 00:18:14.097 "driver_specific": { 00:18:14.097 "lvol": { 00:18:14.097 "lvol_store_uuid": "fe455cf7-e841-4926-9673-87718b027e13", 00:18:14.097 "base_bdev": "nvme0n1", 00:18:14.097 "thin_provision": true, 00:18:14.097 "num_allocated_clusters": 0, 00:18:14.097 "snapshot": false, 00:18:14.097 "clone": false, 00:18:14.097 "esnap_clone": false 00:18:14.097 } 00:18:14.097 } 00:18:14.097 } 00:18:14.097 ]' 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:14.097 01:43:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.355 01:43:58 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:14.355 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.613 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:14.613 { 00:18:14.613 "name": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.613 "aliases": [ 00:18:14.613 "lvs/nvme0n1p0" 00:18:14.613 ], 00:18:14.613 "product_name": "Logical Volume", 00:18:14.613 "block_size": 4096, 00:18:14.613 "num_blocks": 26476544, 00:18:14.613 "uuid": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.613 "assigned_rate_limits": { 00:18:14.613 "rw_ios_per_sec": 0, 00:18:14.613 "rw_mbytes_per_sec": 0, 00:18:14.613 "r_mbytes_per_sec": 0, 00:18:14.613 "w_mbytes_per_sec": 0 00:18:14.613 }, 00:18:14.613 "claimed": false, 00:18:14.613 "zoned": false, 00:18:14.613 "supported_io_types": { 00:18:14.613 "read": true, 00:18:14.613 "write": true, 00:18:14.613 "unmap": true, 00:18:14.613 "flush": false, 00:18:14.613 "reset": true, 00:18:14.613 "nvme_admin": false, 00:18:14.613 "nvme_io": false, 00:18:14.613 "nvme_io_md": false, 00:18:14.613 "write_zeroes": true, 00:18:14.613 "zcopy": false, 00:18:14.613 "get_zone_info": false, 00:18:14.613 "zone_management": false, 00:18:14.613 "zone_append": false, 00:18:14.613 "compare": false, 00:18:14.613 "compare_and_write": false, 00:18:14.613 "abort": false, 00:18:14.613 "seek_hole": true, 00:18:14.613 "seek_data": true, 00:18:14.613 "copy": false, 00:18:14.613 "nvme_iov_md": false 00:18:14.613 }, 00:18:14.613 "driver_specific": { 00:18:14.613 "lvol": { 00:18:14.613 "lvol_store_uuid": "fe455cf7-e841-4926-9673-87718b027e13", 00:18:14.613 "base_bdev": "nvme0n1", 00:18:14.613 "thin_provision": true, 00:18:14.613 "num_allocated_clusters": 0, 00:18:14.613 "snapshot": false, 00:18:14.613 "clone": false, 00:18:14.613 "esnap_clone": false 00:18:14.613 } 00:18:14.613 } 00:18:14.613 } 00:18:14.614 ]' 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:14.614 01:43:58 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:14.872 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 00:18:14.872 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:14.872 { 00:18:14.872 "name": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.872 "aliases": [ 00:18:14.872 "lvs/nvme0n1p0" 00:18:14.872 ], 00:18:14.872 "product_name": "Logical Volume", 00:18:14.872 "block_size": 4096, 00:18:14.872 "num_blocks": 26476544, 00:18:14.872 "uuid": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:14.872 "assigned_rate_limits": { 00:18:14.872 "rw_ios_per_sec": 0, 00:18:14.872 "rw_mbytes_per_sec": 0, 00:18:14.872 "r_mbytes_per_sec": 0, 00:18:14.872 "w_mbytes_per_sec": 0 00:18:14.872 }, 00:18:14.872 "claimed": false, 00:18:14.873 "zoned": false, 00:18:14.873 "supported_io_types": { 00:18:14.873 "read": true, 00:18:14.873 "write": true, 00:18:14.873 "unmap": true, 00:18:14.873 "flush": false, 00:18:14.873 "reset": true, 00:18:14.873 "nvme_admin": false, 00:18:14.873 "nvme_io": false, 00:18:14.873 "nvme_io_md": false, 00:18:14.873 "write_zeroes": true, 00:18:14.873 "zcopy": false, 00:18:14.873 "get_zone_info": false, 00:18:14.873 "zone_management": false, 00:18:14.873 "zone_append": false, 00:18:14.873 "compare": false, 00:18:14.873 "compare_and_write": false, 00:18:14.873 "abort": false, 00:18:14.873 "seek_hole": true, 00:18:14.873 "seek_data": true, 00:18:14.873 "copy": false, 00:18:14.873 "nvme_iov_md": false 00:18:14.873 }, 00:18:14.873 "driver_specific": { 00:18:14.873 "lvol": { 00:18:14.873 "lvol_store_uuid": "fe455cf7-e841-4926-9673-87718b027e13", 00:18:14.873 "base_bdev": "nvme0n1", 00:18:14.873 "thin_provision": true, 00:18:14.873 "num_allocated_clusters": 0, 00:18:14.873 "snapshot": false, 00:18:14.873 "clone": false, 00:18:14.873 "esnap_clone": false 00:18:14.873 } 00:18:14.873 } 00:18:14.873 } 00:18:14.873 ]' 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:14.873 01:43:58 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6 -c nvc0n1p0 --l2p_dram_limit 60 00:18:15.134 [2024-11-21 01:43:59.005577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.005624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:15.134 [2024-11-21 01:43:59.005638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.134 
[2024-11-21 01:43:59.005645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.005692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.005701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:15.134 [2024-11-21 01:43:59.005709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:15.134 [2024-11-21 01:43:59.005715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.005749] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:15.134 [2024-11-21 01:43:59.006260] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:15.134 [2024-11-21 01:43:59.006283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.006289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:15.134 [2024-11-21 01:43:59.006298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:18:15.134 [2024-11-21 01:43:59.006304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.006363] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e415f1ba-efdf-4736-8b9a-9b27cab326e2 00:18:15.134 [2024-11-21 01:43:59.007699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.007731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:15.134 [2024-11-21 01:43:59.007739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:15.134 [2024-11-21 01:43:59.007747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.014413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.014442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:15.134 [2024-11-21 01:43:59.014449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.614 ms 00:18:15.134 [2024-11-21 01:43:59.014457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.014541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.014551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:15.134 [2024-11-21 01:43:59.014558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:15.134 [2024-11-21 01:43:59.014569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.014628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.014638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:15.134 [2024-11-21 01:43:59.014645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:15.134 [2024-11-21 01:43:59.014653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.014676] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:15.134 [2024-11-21 01:43:59.017910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 
01:43:59.017936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:15.134 [2024-11-21 01:43:59.017946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:18:15.134 [2024-11-21 01:43:59.017955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.017991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.017999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:15.134 [2024-11-21 01:43:59.018007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:15.134 [2024-11-21 01:43:59.018013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.018033] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:15.134 [2024-11-21 01:43:59.018153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:15.134 [2024-11-21 01:43:59.018171] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:15.134 [2024-11-21 01:43:59.018182] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:15.134 [2024-11-21 01:43:59.018192] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018199] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018207] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:15.134 [2024-11-21 01:43:59.018213] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:15.134 [2024-11-21 01:43:59.018220] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:15.134 [2024-11-21 01:43:59.018226] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:15.134 [2024-11-21 01:43:59.018233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.018241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:15.134 [2024-11-21 01:43:59.018249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:18:15.134 [2024-11-21 01:43:59.018255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.018327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.134 [2024-11-21 01:43:59.018338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:15.134 [2024-11-21 01:43:59.018345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:15.134 [2024-11-21 01:43:59.018351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.134 [2024-11-21 01:43:59.018445] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:15.134 [2024-11-21 01:43:59.018452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:15.134 [2024-11-21 01:43:59.018463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018476] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:15.134 [2024-11-21 01:43:59.018481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:15.134 [2024-11-21 01:43:59.018499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.134 [2024-11-21 01:43:59.018512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:15.134 [2024-11-21 01:43:59.018518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:15.134 [2024-11-21 01:43:59.018524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:15.134 [2024-11-21 01:43:59.018529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:15.134 [2024-11-21 01:43:59.018539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:15.134 [2024-11-21 01:43:59.018544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:15.134 [2024-11-21 01:43:59.018559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:15.134 [2024-11-21 01:43:59.018578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:15.134 [2024-11-21 01:43:59.018595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.134 [2024-11-21 01:43:59.018606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:15.134 [2024-11-21 01:43:59.018623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:15.134 [2024-11-21 01:43:59.018629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.135 [2024-11-21 01:43:59.018637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:15.135 [2024-11-21 01:43:59.018642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:15.135 [2024-11-21 01:43:59.018648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:15.135 [2024-11-21 01:43:59.018653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:15.135 [2024-11-21 01:43:59.018661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:15.135 [2024-11-21 01:43:59.018665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.135 [2024-11-21 01:43:59.018672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:15.135 [2024-11-21 01:43:59.018688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:15.135 [2024-11-21 01:43:59.018695] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:15.135 [2024-11-21 01:43:59.018700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:15.135 [2024-11-21 01:43:59.018706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:15.135 [2024-11-21 01:43:59.018711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.135 [2024-11-21 01:43:59.018717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:15.135 [2024-11-21 01:43:59.018722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:15.135 [2024-11-21 01:43:59.018729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.135 [2024-11-21 01:43:59.018735] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:15.135 [2024-11-21 01:43:59.018742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:15.135 [2024-11-21 01:43:59.018748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:15.135 [2024-11-21 01:43:59.018757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:15.135 [2024-11-21 01:43:59.018763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:15.135 [2024-11-21 01:43:59.018771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:15.135 [2024-11-21 01:43:59.018777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:15.135 [2024-11-21 01:43:59.018785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:15.135 [2024-11-21 01:43:59.018790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:15.135 [2024-11-21 01:43:59.018797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:15.135 [2024-11-21 01:43:59.018805] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:15.135 [2024-11-21 01:43:59.018815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:15.135 [2024-11-21 01:43:59.018829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:15.135 [2024-11-21 01:43:59.018835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:15.135 [2024-11-21 01:43:59.018841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:15.135 [2024-11-21 01:43:59.018847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:15.135 [2024-11-21 01:43:59.018853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:15.135 [2024-11-21 01:43:59.018858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:15.135 [2024-11-21 01:43:59.018866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:15.135 [2024-11-21 01:43:59.018871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:15.135 [2024-11-21 01:43:59.018880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:15.135 [2024-11-21 01:43:59.018912] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:15.135 [2024-11-21 01:43:59.018920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:15.135 [2024-11-21 01:43:59.018935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:15.135 [2024-11-21 01:43:59.018941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:15.135 [2024-11-21 01:43:59.018948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:15.135 [2024-11-21 01:43:59.018953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:15.135 [2024-11-21 01:43:59.018961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:15.135 [2024-11-21 01:43:59.018966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:18:15.135 [2024-11-21 01:43:59.018975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:15.135 [2024-11-21 01:43:59.019049] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:15.135 [2024-11-21 01:43:59.019066] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:17.675 [2024-11-21 01:44:01.436695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.436769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:17.675 [2024-11-21 01:44:01.436789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2417.633 ms 00:18:17.675 [2024-11-21 01:44:01.436799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.465044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.465092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.675 [2024-11-21 01:44:01.465106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.022 ms 00:18:17.675 [2024-11-21 01:44:01.465117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.465257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.465270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:17.675 [2024-11-21 01:44:01.465280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:17.675 [2024-11-21 01:44:01.465292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.515089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.515168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.675 [2024-11-21 01:44:01.515191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.754 ms 00:18:17.675 [2024-11-21 01:44:01.515209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.515269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.515286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.675 [2024-11-21 01:44:01.515299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.675 [2024-11-21 01:44:01.515313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.515876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.515915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.675 [2024-11-21 01:44:01.515929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:18:17.675 [2024-11-21 01:44:01.515947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.516136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.516153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.675 [2024-11-21 01:44:01.516166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:17.675 [2024-11-21 01:44:01.516182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.533801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.533834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.675 [2024-11-21 
01:44:01.533844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.582 ms 00:18:17.675 [2024-11-21 01:44:01.533853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.546103] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:17.675 [2024-11-21 01:44:01.563284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.563330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:17.675 [2024-11-21 01:44:01.563342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.320 ms 00:18:17.675 [2024-11-21 01:44:01.563352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.618040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.618087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:17.675 [2024-11-21 01:44:01.618102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.653 ms 00:18:17.675 [2024-11-21 01:44:01.618110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.675 [2024-11-21 01:44:01.618303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.675 [2024-11-21 01:44:01.618313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:17.675 [2024-11-21 01:44:01.618327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:17.675 [2024-11-21 01:44:01.618334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.641601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.641645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:17.936 [2024-11-21 01:44:01.641659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.197 ms 00:18:17.936 [2024-11-21 01:44:01.641667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.664741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.664771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:17.936 [2024-11-21 01:44:01.664783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.028 ms 00:18:17.936 [2024-11-21 01:44:01.664790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.665392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.665414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:17.936 [2024-11-21 01:44:01.665426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:17.936 [2024-11-21 01:44:01.665433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.735329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.735362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:17.936 [2024-11-21 01:44:01.735377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.853 ms 00:18:17.936 [2024-11-21 01:44:01.735388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 
01:44:01.760968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.760998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:17.936 [2024-11-21 01:44:01.761011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.489 ms 00:18:17.936 [2024-11-21 01:44:01.761019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.784853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.784883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:17.936 [2024-11-21 01:44:01.784894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.788 ms 00:18:17.936 [2024-11-21 01:44:01.784901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.809410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.809442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:17.936 [2024-11-21 01:44:01.809454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.457 ms 00:18:17.936 [2024-11-21 01:44:01.809461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.809507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.809516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:17.936 [2024-11-21 01:44:01.809529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:17.936 [2024-11-21 01:44:01.809539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.809641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.936 [2024-11-21 01:44:01.809653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:17.936 [2024-11-21 01:44:01.809664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:17.936 [2024-11-21 01:44:01.809671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.936 [2024-11-21 01:44:01.810707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2804.649 ms, result 0 00:18:17.936 { 00:18:17.936 "name": "ftl0", 00:18:17.936 "uuid": "e415f1ba-efdf-4736-8b9a-9b27cab326e2" 00:18:17.936 } 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:17.936 01:44:01 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:18.197 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:18.458 [ 00:18:18.458 { 00:18:18.458 "name": "ftl0", 00:18:18.458 "aliases": [ 00:18:18.458 "e415f1ba-efdf-4736-8b9a-9b27cab326e2" 00:18:18.458 ], 00:18:18.458 "product_name": "FTL 
disk", 00:18:18.458 "block_size": 4096, 00:18:18.458 "num_blocks": 20971520, 00:18:18.458 "uuid": "e415f1ba-efdf-4736-8b9a-9b27cab326e2", 00:18:18.458 "assigned_rate_limits": { 00:18:18.458 "rw_ios_per_sec": 0, 00:18:18.458 "rw_mbytes_per_sec": 0, 00:18:18.458 "r_mbytes_per_sec": 0, 00:18:18.458 "w_mbytes_per_sec": 0 00:18:18.458 }, 00:18:18.458 "claimed": false, 00:18:18.458 "zoned": false, 00:18:18.458 "supported_io_types": { 00:18:18.458 "read": true, 00:18:18.458 "write": true, 00:18:18.458 "unmap": true, 00:18:18.458 "flush": true, 00:18:18.458 "reset": false, 00:18:18.458 "nvme_admin": false, 00:18:18.458 "nvme_io": false, 00:18:18.458 "nvme_io_md": false, 00:18:18.458 "write_zeroes": true, 00:18:18.458 "zcopy": false, 00:18:18.458 "get_zone_info": false, 00:18:18.458 "zone_management": false, 00:18:18.458 "zone_append": false, 00:18:18.458 "compare": false, 00:18:18.458 "compare_and_write": false, 00:18:18.458 "abort": false, 00:18:18.458 "seek_hole": false, 00:18:18.458 "seek_data": false, 00:18:18.458 "copy": false, 00:18:18.458 "nvme_iov_md": false 00:18:18.458 }, 00:18:18.458 "driver_specific": { 00:18:18.458 "ftl": { 00:18:18.458 "base_bdev": "6e0f1fcf-f2c5-4be8-a3ac-6dfa7be8a0e6", 00:18:18.458 "cache": "nvc0n1p0" 00:18:18.458 } 00:18:18.458 } 00:18:18.458 } 00:18:18.458 ] 00:18:18.458 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:18.458 01:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:18.458 01:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:18.718 01:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:18.718 01:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:18.718 [2024-11-21 01:44:02.603359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.603389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:18.718 [2024-11-21 01:44:02.603398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:18.718 [2024-11-21 01:44:02.603406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.603434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:18.718 [2024-11-21 01:44:02.605708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.605728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:18.718 [2024-11-21 01:44:02.605739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:18:18.718 [2024-11-21 01:44:02.605746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.606124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.606140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:18.718 [2024-11-21 01:44:02.606149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:18:18.718 [2024-11-21 01:44:02.606156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.608631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.608651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:18.718 
[2024-11-21 01:44:02.608661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:18:18.718 [2024-11-21 01:44:02.608669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.613589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.613619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:18.718 [2024-11-21 01:44:02.613629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.897 ms 00:18:18.718 [2024-11-21 01:44:02.613636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.631815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.631842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:18.718 [2024-11-21 01:44:02.631852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:18:18.718 [2024-11-21 01:44:02.631857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.644462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.644490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:18.718 [2024-11-21 01:44:02.644502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.552 ms 00:18:18.718 [2024-11-21 01:44:02.644510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.644663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.644679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:18.718 [2024-11-21 01:44:02.644689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:18.718 [2024-11-21 01:44:02.644695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.718 [2024-11-21 01:44:02.663011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.718 [2024-11-21 01:44:02.663038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:18.718 [2024-11-21 01:44:02.663047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.292 ms 00:18:18.718 [2024-11-21 01:44:02.663053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.980 [2024-11-21 01:44:02.680464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.980 [2024-11-21 01:44:02.680491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:18.980 [2024-11-21 01:44:02.680500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.376 ms 00:18:18.980 [2024-11-21 01:44:02.680506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.980 [2024-11-21 01:44:02.697629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.980 [2024-11-21 01:44:02.697654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:18.980 [2024-11-21 01:44:02.697663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.084 ms 00:18:18.980 [2024-11-21 01:44:02.697668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.980 [2024-11-21 01:44:02.714624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.980 [2024-11-21 01:44:02.714650] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:18.980 [2024-11-21 01:44:02.714659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.867 ms 00:18:18.980 [2024-11-21 01:44:02.714664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.980 [2024-11-21 01:44:02.714700] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:18.980 [2024-11-21 01:44:02.714712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 
[2024-11-21 01:44:02.714862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.714996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:18.980 [2024-11-21 01:44:02.715034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:18.980 [2024-11-21 01:44:02.715230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:18.981 [2024-11-21 01:44:02.715420] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:18.981 [2024-11-21 01:44:02.715428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e415f1ba-efdf-4736-8b9a-9b27cab326e2 00:18:18.981 [2024-11-21 01:44:02.715434] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:18.981 [2024-11-21 01:44:02.715443] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:18.981 [2024-11-21 01:44:02.715449] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:18.981 [2024-11-21 01:44:02.715457] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:18.981 [2024-11-21 01:44:02.715464] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:18.981 [2024-11-21 01:44:02.715473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:18.981 [2024-11-21 01:44:02.715478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:18.981 [2024-11-21 01:44:02.715484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:18.981 [2024-11-21 01:44:02.715489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:18.981 [2024-11-21 01:44:02.715497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-21 01:44:02.715503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:18.981 [2024-11-21 01:44:02.715510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:18:18.981 [2024-11-21 01:44:02.715516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.725448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-21 01:44:02.725477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:18.981 [2024-11-21 01:44:02.725486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.897 ms 00:18:18.981 [2024-11-21 01:44:02.725492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.725786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.981 [2024-11-21 01:44:02.725800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:18.981 [2024-11-21 01:44:02.725809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:18:18.981 [2024-11-21 01:44:02.725814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.762472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.762502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.981 [2024-11-21 01:44:02.762511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.762518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:18.981 [2024-11-21 01:44:02.762579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.762585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.981 [2024-11-21 01:44:02.762593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.762599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.762680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.762689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.981 [2024-11-21 01:44:02.762700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.762705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.762729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.762737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.981 [2024-11-21 01:44:02.762745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.762751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.828400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.828437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.981 [2024-11-21 01:44:02.828447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.828454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.981 [2024-11-21 01:44:02.879340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.981 [2024-11-21 01:44:02.879458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.981 [2024-11-21 01:44:02.879532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.981 [2024-11-21 01:44:02.879656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 
01:44:02.879662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:18.981 [2024-11-21 01:44:02.879729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.981 [2024-11-21 01:44:02.879796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.879851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:18.981 [2024-11-21 01:44:02.879859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.981 [2024-11-21 01:44:02.879866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:18.981 [2024-11-21 01:44:02.879872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.981 [2024-11-21 01:44:02.880033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.630 ms, result 0 00:18:18.981 true 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75029 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75029 ']' 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75029 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75029 00:18:18.981 killing process with pid 75029 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.981 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.982 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75029' 00:18:18.982 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75029 00:18:18.982 01:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75029 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.561 01:44:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:25.561 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:25.561 fio-3.35 00:18:25.561 Starting 1 thread 00:18:30.845 00:18:30.845 test: (groupid=0, jobs=1): err= 0: pid=75202: Thu Nov 21 01:44:14 2024 00:18:30.845 read: IOPS=961, BW=63.8MiB/s (66.9MB/s)(255MiB/3988msec) 00:18:30.845 slat (nsec): min=3981, max=49973, avg=6201.98, stdev=3025.00 00:18:30.845 clat (usec): min=263, max=1533, avg=470.40, stdev=200.35 00:18:30.845 lat (usec): min=268, max=1560, avg=476.60, stdev=201.72 00:18:30.845 clat percentiles (usec): 00:18:30.845 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:18:30.845 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 355], 60.00th=[ 437], 00:18:30.845 | 70.00th=[ 519], 80.00th=[ 594], 90.00th=[ 816], 95.00th=[ 922], 00:18:30.845 | 99.00th=[ 1057], 99.50th=[ 1156], 99.90th=[ 1319], 99.95th=[ 1434], 00:18:30.845 | 99.99th=[ 1532] 00:18:30.845 write: IOPS=967, BW=64.3MiB/s (67.4MB/s)(256MiB/3985msec); 0 zone resets 00:18:30.845 slat (nsec): min=14554, max=61282, avg=19992.21, stdev=4418.18 00:18:30.845 clat (usec): min=299, max=2104, avg=527.08, stdev=234.78 00:18:30.845 lat (usec): min=315, max=2129, avg=547.07, stdev=236.58 00:18:30.845 clat percentiles (usec): 00:18:30.845 | 1.00th=[ 338], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:18:30.845 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 412], 60.00th=[ 494], 00:18:30.845 | 70.00th=[ 594], 80.00th=[ 693], 90.00th=[ 898], 95.00th=[ 988], 00:18:30.845 | 99.00th=[ 1303], 99.50th=[ 1500], 99.90th=[ 1811], 99.95th=[ 1827], 00:18:30.845 | 99.99th=[ 2114] 00:18:30.845 bw ( KiB/s): min=37400, max=90168, per=100.00%, avg=66251.43, stdev=20127.30, samples=7 00:18:30.845 iops : min= 550, max= 1326, avg=974.29, stdev=295.99, samples=7 00:18:30.845 lat (usec) : 500=64.23%, 750=20.98%, 
1000=11.67% 00:18:30.845 lat (msec) : 2=3.11%, 4=0.01% 00:18:30.845 cpu : usr=99.07%, sys=0.18%, ctx=5, majf=0, minf=1169 00:18:30.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.845 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.845 00:18:30.845 Run status group 0 (all jobs): 00:18:30.845 READ: bw=63.8MiB/s (66.9MB/s), 63.8MiB/s-63.8MiB/s (66.9MB/s-66.9MB/s), io=255MiB (267MB), run=3988-3988msec 00:18:30.845 WRITE: bw=64.3MiB/s (67.4MB/s), 64.3MiB/s-64.3MiB/s (67.4MB/s-67.4MB/s), io=256MiB (269MB), run=3985-3985msec 00:18:31.856 ----------------------------------------------------- 00:18:31.856 Suppressions used: 00:18:31.856 count bytes template 00:18:31.856 1 5 /usr/src/fio/parse.c 00:18:31.856 1 8 libtcmalloc_minimal.so 00:18:31.856 1 904 libcrypto.so 00:18:31.856 ----------------------------------------------------- 00:18:31.856 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:31.857 01:44:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:32.117 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:32.117 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:32.117 fio-3.35 00:18:32.117 Starting 2 threads 00:18:58.694 00:18:58.694 first_half: (groupid=0, jobs=1): err= 0: pid=75305: Thu Nov 21 01:44:41 2024 00:18:58.694 read: IOPS=2753, BW=10.8MiB/s (11.3MB/s)(255MiB/23695msec) 00:18:58.694 slat (usec): min=3, max=328, avg= 5.21, stdev= 1.96 00:18:58.694 clat (usec): min=659, max=457860, avg=36191.71, stdev=20305.63 00:18:58.694 lat (usec): min=664, max=457870, avg=36196.92, stdev=20305.73 00:18:58.694 clat percentiles (msec): 00:18:58.694 | 1.00th=[ 6], 5.00th=[ 28], 10.00th=[ 31], 20.00th=[ 31], 00:18:58.694 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:18:58.694 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 42], 95.00th=[ 52], 00:18:58.694 | 99.00th=[ 134], 99.50th=[ 157], 99.90th=[ 245], 99.95th=[ 376], 00:18:58.694 | 99.99th=[ 447] 00:18:58.694 write: IOPS=3551, BW=13.9MiB/s (14.5MB/s)(256MiB/18452msec); 0 zone resets 00:18:58.694 slat (usec): min=3, max=2815, avg= 6.85, stdev=15.61 00:18:58.694 clat (usec): min=358, max=93955, avg=10221.45, stdev=16425.69 00:18:58.694 lat (usec): min=362, max=93960, avg=10228.30, stdev=16425.85 00:18:58.694 clat percentiles (usec): 00:18:58.694 | 1.00th=[ 709], 5.00th=[ 889], 10.00th=[ 1020], 20.00th=[ 1352], 00:18:58.694 | 30.00th=[ 2409], 40.00th=[ 3720], 50.00th=[ 4621], 60.00th=[ 5604], 00:18:58.694 | 70.00th=[ 7046], 80.00th=[13698], 90.00th=[20579], 95.00th=[56361], 00:18:58.694 | 99.00th=[82314], 99.50th=[84411], 99.90th=[89654], 99.95th=[90702], 00:18:58.694 | 99.99th=[92799] 00:18:58.694 bw ( KiB/s): min= 16, max=41328, per=100.00%, avg=26211.75, stdev=13265.38, samples=20 00:18:58.694 iops : min= 4, max=10332, avg=6552.90, stdev=3316.34, samples=20 00:18:58.695 lat (usec) : 500=0.04%, 750=0.80%, 1000=3.90% 00:18:58.695 lat (msec) : 2=9.43%, 4=7.91%, 10=15.84%, 20=7.67%, 50=48.69% 00:18:58.695 lat (msec) : 100=4.57%, 250=1.11%, 500=0.05% 00:18:58.695 cpu : usr=99.19%, sys=0.14%, ctx=34, majf=0, minf=5587 00:18:58.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:58.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.695 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.695 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.695 second_half: (groupid=0, jobs=1): err= 0: pid=75306: Thu Nov 21 01:44:41 2024 00:18:58.695 read: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(255MiB/23897msec) 00:18:58.695 slat (nsec): min=3043, max=39580, avg=5146.52, stdev=1384.94 00:18:58.695 clat (usec): min=574, max=446363, avg=35637.55, stdev=22241.77 00:18:58.695 lat (usec): min=578, max=446371, avg=35642.69, stdev=22241.88 00:18:58.695 clat percentiles (msec): 00:18:58.695 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:18:58.695 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:18:58.695 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 41], 
95.00th=[ 48], 00:18:58.695 | 99.00th=[ 157], 99.50th=[ 174], 99.90th=[ 262], 99.95th=[ 342], 00:18:58.695 | 99.99th=[ 443] 00:18:58.695 write: IOPS=3208, BW=12.5MiB/s (13.1MB/s)(256MiB/20424msec); 0 zone resets 00:18:58.695 slat (usec): min=3, max=4155, avg= 6.85, stdev=19.15 00:18:58.695 clat (usec): min=332, max=92320, avg=11171.65, stdev=17391.96 00:18:58.695 lat (usec): min=338, max=92326, avg=11178.50, stdev=17392.20 00:18:58.695 clat percentiles (usec): 00:18:58.695 | 1.00th=[ 668], 5.00th=[ 857], 10.00th=[ 1020], 20.00th=[ 1385], 00:18:58.695 | 30.00th=[ 2442], 40.00th=[ 3261], 50.00th=[ 4178], 60.00th=[ 5276], 00:18:58.695 | 70.00th=[ 7767], 80.00th=[16450], 90.00th=[31851], 95.00th=[57410], 00:18:58.695 | 99.00th=[83362], 99.50th=[85459], 99.90th=[89654], 99.95th=[90702], 00:18:58.695 | 99.99th=[91751] 00:18:58.695 bw ( KiB/s): min= 8, max=42520, per=85.09%, avg=21843.04, stdev=13154.78, samples=24 00:18:58.695 iops : min= 2, max=10630, avg=5460.75, stdev=3288.70, samples=24 00:18:58.695 lat (usec) : 500=0.02%, 750=1.16%, 1000=3.54% 00:18:58.695 lat (msec) : 2=9.17%, 4=10.71%, 10=13.12%, 20=7.18%, 50=49.49% 00:18:58.695 lat (msec) : 100=4.41%, 250=1.16%, 500=0.06% 00:18:58.695 cpu : usr=99.35%, sys=0.14%, ctx=34, majf=0, minf=5528 00:18:58.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:58.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.695 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.695 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.695 00:18:58.695 Run status group 0 (all jobs): 00:18:58.695 READ: bw=21.3MiB/s (22.4MB/s), 10.7MiB/s-10.8MiB/s (11.2MB/s-11.3MB/s), io=510MiB (534MB), run=23695-23897msec 00:18:58.695 WRITE: bw=25.1MiB/s (26.3MB/s), 12.5MiB/s-13.9MiB/s (13.1MB/s-14.5MB/s), io=512MiB (537MB), run=18452-20424msec 00:18:58.954 ----------------------------------------------------- 00:18:58.954 Suppressions used: 00:18:58.954 count bytes template 00:18:58.954 2 10 /usr/src/fio/parse.c 00:18:58.954 2 192 /usr/src/fio/iolog.c 00:18:58.954 1 8 libtcmalloc_minimal.so 00:18:58.954 1 904 libcrypto.so 00:18:58.954 ----------------------------------------------------- 00:18:58.954 00:18:58.954 01:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:58.954 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.954 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:59.215 01:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:59.215 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:59.215 fio-3.35 00:18:59.215 Starting 1 thread 00:19:14.104 00:19:14.104 test: (groupid=0, jobs=1): err= 0: pid=75613: Thu Nov 21 01:44:56 2024 00:19:14.104 read: IOPS=7923, BW=31.0MiB/s (32.5MB/s)(255MiB/8229msec) 00:19:14.104 slat (nsec): min=3052, max=18386, avg=4650.96, stdev=1039.89 00:19:14.104 clat (usec): min=517, max=31839, avg=16146.44, stdev=1837.41 00:19:14.105 lat (usec): min=521, max=31843, avg=16151.09, stdev=1837.43 00:19:14.105 clat percentiles (usec): 00:19:14.105 | 1.00th=[13698], 5.00th=[13960], 10.00th=[15139], 20.00th=[15401], 00:19:14.105 | 30.00th=[15533], 40.00th=[15664], 50.00th=[15795], 60.00th=[15926], 00:19:14.105 | 70.00th=[16188], 80.00th=[16319], 90.00th=[17171], 95.00th=[20055], 00:19:14.105 | 99.00th=[23987], 99.50th=[25035], 99.90th=[28705], 99.95th=[30278], 00:19:14.105 | 99.99th=[31065] 00:19:14.105 write: IOPS=16.5k, BW=64.4MiB/s (67.5MB/s)(256MiB/3975msec); 0 zone resets 00:19:14.105 slat (usec): min=3, max=140, avg= 6.21, stdev= 2.11 00:19:14.105 clat (usec): min=439, max=42997, avg=7726.11, stdev=9039.01 00:19:14.105 lat (usec): min=445, max=43003, avg=7732.32, stdev=9039.01 00:19:14.105 clat percentiles (usec): 00:19:14.105 | 1.00th=[ 586], 5.00th=[ 734], 10.00th=[ 807], 20.00th=[ 914], 00:19:14.105 | 30.00th=[ 1029], 40.00th=[ 1352], 50.00th=[ 4948], 60.00th=[ 6652], 00:19:14.105 | 70.00th=[ 8291], 80.00th=[10028], 90.00th=[26084], 95.00th=[27395], 00:19:14.105 | 99.00th=[33424], 99.50th=[35914], 99.90th=[41157], 99.95th=[41681], 00:19:14.105 | 99.99th=[42206] 00:19:14.105 bw ( KiB/s): min=53160, max=91584, per=99.38%, avg=65536.00, stdev=11232.22, samples=8 00:19:14.105 iops : min=13290, max=22896, avg=16384.00, stdev=2808.05, samples=8 00:19:14.105 lat (usec) : 500=0.03%, 750=2.95%, 1000=11.14% 00:19:14.105 lat (msec) : 2=6.59%, 4=0.88%, 10=18.54%, 20=49.51%, 50=10.36% 00:19:14.105 cpu : usr=99.16%, sys=0.18%, ctx=20, majf=0, minf=5565 00:19:14.105 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:14.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.105 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.105 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.105 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.105 00:19:14.105 Run status group 0 (all jobs): 00:19:14.105 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=255MiB (267MB), run=8229-8229msec 00:19:14.105 WRITE: bw=64.4MiB/s (67.5MB/s), 64.4MiB/s-64.4MiB/s (67.5MB/s-67.5MB/s), io=256MiB (268MB), run=3975-3975msec 00:19:14.105 ----------------------------------------------------- 00:19:14.105 Suppressions used: 00:19:14.105 count bytes template 00:19:14.105 1 5 /usr/src/fio/parse.c 00:19:14.105 2 192 /usr/src/fio/iolog.c 00:19:14.105 1 8 libtcmalloc_minimal.so 00:19:14.105 1 904 libcrypto.so 00:19:14.105 ----------------------------------------------------- 00:19:14.105 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:14.105 Remove shared memory files 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:14.105 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57100 /dev/shm/spdk_tgt_trace.pid73943 00:19:14.367 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:14.367 01:44:58 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:14.367 ************************************ 00:19:14.367 END TEST ftl_fio_basic 00:19:14.367 ************************************ 00:19:14.367 00:19:14.367 real 1m2.587s 00:19:14.367 user 2m17.812s 00:19:14.367 sys 0m2.904s 00:19:14.367 01:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.367 01:44:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:14.367 01:44:58 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:14.367 01:44:58 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:14.367 01:44:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.367 01:44:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:14.367 ************************************ 00:19:14.367 START TEST ftl_bdevperf 00:19:14.367 ************************************ 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:14.367 * Looking for test storage... 
00:19:14.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:14.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.367 --rc genhtml_branch_coverage=1 00:19:14.367 --rc genhtml_function_coverage=1 00:19:14.367 --rc genhtml_legend=1 00:19:14.367 --rc geninfo_all_blocks=1 00:19:14.367 --rc geninfo_unexecuted_blocks=1 00:19:14.367 00:19:14.367 ' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:14.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.367 --rc genhtml_branch_coverage=1 00:19:14.367 
--rc genhtml_function_coverage=1 00:19:14.367 --rc genhtml_legend=1 00:19:14.367 --rc geninfo_all_blocks=1 00:19:14.367 --rc geninfo_unexecuted_blocks=1 00:19:14.367 00:19:14.367 ' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:14.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.367 --rc genhtml_branch_coverage=1 00:19:14.367 --rc genhtml_function_coverage=1 00:19:14.367 --rc genhtml_legend=1 00:19:14.367 --rc geninfo_all_blocks=1 00:19:14.367 --rc geninfo_unexecuted_blocks=1 00:19:14.367 00:19:14.367 ' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:14.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.367 --rc genhtml_branch_coverage=1 00:19:14.367 --rc genhtml_function_coverage=1 00:19:14.367 --rc genhtml_legend=1 00:19:14.367 --rc geninfo_all_blocks=1 00:19:14.367 --rc geninfo_unexecuted_blocks=1 00:19:14.367 00:19:14.367 ' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75840 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75840 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75840 ']' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.367 01:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:14.626 [2024-11-21 01:44:58.329377] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:19:14.626 [2024-11-21 01:44:58.329580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75840 ] 00:19:14.626 [2024-11-21 01:44:58.479978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.626 [2024-11-21 01:44:58.574154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.192 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.192 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:15.192 01:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:15.192 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:15.192 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:15.193 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:15.193 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:15.193 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:15.762 { 00:19:15.762 "name": "nvme0n1", 00:19:15.762 "aliases": [ 00:19:15.762 "3c2103cf-c968-4424-ae1a-282964896987" 00:19:15.762 ], 00:19:15.762 "product_name": "NVMe disk", 00:19:15.762 "block_size": 4096, 00:19:15.762 "num_blocks": 1310720, 00:19:15.762 "uuid": "3c2103cf-c968-4424-ae1a-282964896987", 00:19:15.762 "numa_id": -1, 00:19:15.762 "assigned_rate_limits": { 00:19:15.762 "rw_ios_per_sec": 0, 00:19:15.762 "rw_mbytes_per_sec": 0, 00:19:15.762 "r_mbytes_per_sec": 0, 00:19:15.762 "w_mbytes_per_sec": 0 00:19:15.762 }, 00:19:15.762 "claimed": true, 00:19:15.762 "claim_type": "read_many_write_one", 00:19:15.762 "zoned": false, 00:19:15.762 "supported_io_types": { 00:19:15.762 "read": true, 00:19:15.762 "write": true, 00:19:15.762 "unmap": true, 00:19:15.762 "flush": true, 00:19:15.762 "reset": true, 00:19:15.762 "nvme_admin": true, 00:19:15.762 "nvme_io": true, 00:19:15.762 "nvme_io_md": false, 00:19:15.762 "write_zeroes": true, 00:19:15.762 "zcopy": false, 00:19:15.762 "get_zone_info": false, 00:19:15.762 "zone_management": false, 00:19:15.762 "zone_append": false, 00:19:15.762 "compare": true, 00:19:15.762 "compare_and_write": false, 00:19:15.762 "abort": true, 00:19:15.762 "seek_hole": false, 00:19:15.762 "seek_data": false, 00:19:15.762 "copy": true, 00:19:15.762 "nvme_iov_md": false 00:19:15.762 }, 00:19:15.762 "driver_specific": { 00:19:15.762 
"nvme": [ 00:19:15.762 { 00:19:15.762 "pci_address": "0000:00:11.0", 00:19:15.762 "trid": { 00:19:15.762 "trtype": "PCIe", 00:19:15.762 "traddr": "0000:00:11.0" 00:19:15.762 }, 00:19:15.762 "ctrlr_data": { 00:19:15.762 "cntlid": 0, 00:19:15.762 "vendor_id": "0x1b36", 00:19:15.762 "model_number": "QEMU NVMe Ctrl", 00:19:15.762 "serial_number": "12341", 00:19:15.762 "firmware_revision": "8.0.0", 00:19:15.762 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:15.762 "oacs": { 00:19:15.762 "security": 0, 00:19:15.762 "format": 1, 00:19:15.762 "firmware": 0, 00:19:15.762 "ns_manage": 1 00:19:15.762 }, 00:19:15.762 "multi_ctrlr": false, 00:19:15.762 "ana_reporting": false 00:19:15.762 }, 00:19:15.762 "vs": { 00:19:15.762 "nvme_version": "1.4" 00:19:15.762 }, 00:19:15.762 "ns_data": { 00:19:15.762 "id": 1, 00:19:15.762 "can_share": false 00:19:15.762 } 00:19:15.762 } 00:19:15.762 ], 00:19:15.762 "mp_policy": "active_passive" 00:19:15.762 } 00:19:15.762 } 00:19:15.762 ]' 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:15.762 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:16.021 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=fe455cf7-e841-4926-9673-87718b027e13 00:19:16.021 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:16.021 01:44:59 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fe455cf7-e841-4926-9673-87718b027e13 00:19:16.282 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:16.543 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=e6e903d9-e74e-41de-b71f-d1f675296a1e 00:19:16.543 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e6e903d9-e74e-41de-b71f-d1f675296a1e 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:16.806 01:45:00 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:16.806 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:17.068 { 00:19:17.068 "name": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.068 "aliases": [ 00:19:17.068 "lvs/nvme0n1p0" 00:19:17.068 ], 00:19:17.068 "product_name": "Logical Volume", 00:19:17.068 "block_size": 4096, 00:19:17.068 "num_blocks": 26476544, 00:19:17.068 "uuid": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.068 "assigned_rate_limits": { 00:19:17.068 "rw_ios_per_sec": 0, 00:19:17.068 "rw_mbytes_per_sec": 0, 00:19:17.068 "r_mbytes_per_sec": 0, 00:19:17.068 "w_mbytes_per_sec": 0 00:19:17.068 }, 00:19:17.068 "claimed": false, 00:19:17.068 "zoned": false, 00:19:17.068 "supported_io_types": { 00:19:17.068 "read": true, 00:19:17.068 "write": true, 00:19:17.068 "unmap": true, 00:19:17.068 "flush": false, 00:19:17.068 "reset": true, 00:19:17.068 "nvme_admin": false, 00:19:17.068 "nvme_io": false, 00:19:17.068 "nvme_io_md": false, 00:19:17.068 "write_zeroes": true, 00:19:17.068 "zcopy": false, 00:19:17.068 "get_zone_info": false, 00:19:17.068 "zone_management": false, 00:19:17.068 "zone_append": false, 00:19:17.068 "compare": false, 00:19:17.068 "compare_and_write": false, 00:19:17.068 "abort": false, 00:19:17.068 "seek_hole": true, 00:19:17.068 "seek_data": true, 00:19:17.068 "copy": false, 00:19:17.068 "nvme_iov_md": false 00:19:17.068 }, 00:19:17.068 "driver_specific": { 00:19:17.068 "lvol": { 00:19:17.068 "lvol_store_uuid": "e6e903d9-e74e-41de-b71f-d1f675296a1e", 00:19:17.068 "base_bdev": "nvme0n1", 00:19:17.068 "thin_provision": true, 00:19:17.068 "num_allocated_clusters": 0, 00:19:17.068 "snapshot": false, 00:19:17.068 "clone": false, 00:19:17.068 "esnap_clone": false 00:19:17.068 } 00:19:17.068 } 00:19:17.068 } 00:19:17.068 ]' 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:17.068 01:45:00 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:17.330 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:17.592 { 00:19:17.592 "name": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.592 "aliases": [ 00:19:17.592 "lvs/nvme0n1p0" 00:19:17.592 ], 00:19:17.592 "product_name": "Logical Volume", 00:19:17.592 "block_size": 4096, 00:19:17.592 "num_blocks": 26476544, 00:19:17.592 "uuid": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.592 "assigned_rate_limits": { 00:19:17.592 "rw_ios_per_sec": 0, 00:19:17.592 "rw_mbytes_per_sec": 0, 00:19:17.592 "r_mbytes_per_sec": 0, 00:19:17.592 "w_mbytes_per_sec": 0 00:19:17.592 }, 00:19:17.592 "claimed": false, 00:19:17.592 "zoned": false, 00:19:17.592 "supported_io_types": { 00:19:17.592 "read": true, 00:19:17.592 "write": true, 00:19:17.592 "unmap": true, 00:19:17.592 "flush": false, 00:19:17.592 "reset": true, 00:19:17.592 "nvme_admin": false, 00:19:17.592 "nvme_io": false, 00:19:17.592 "nvme_io_md": false, 00:19:17.592 "write_zeroes": true, 00:19:17.592 "zcopy": false, 00:19:17.592 "get_zone_info": false, 00:19:17.592 "zone_management": false, 00:19:17.592 "zone_append": false, 00:19:17.592 "compare": false, 00:19:17.592 "compare_and_write": false, 00:19:17.592 "abort": false, 00:19:17.592 "seek_hole": true, 00:19:17.592 "seek_data": true, 00:19:17.592 "copy": false, 00:19:17.592 "nvme_iov_md": false 00:19:17.592 }, 00:19:17.592 "driver_specific": { 00:19:17.592 "lvol": { 00:19:17.592 "lvol_store_uuid": "e6e903d9-e74e-41de-b71f-d1f675296a1e", 00:19:17.592 "base_bdev": "nvme0n1", 00:19:17.592 "thin_provision": true, 00:19:17.592 "num_allocated_clusters": 0, 00:19:17.592 "snapshot": false, 00:19:17.592 "clone": false, 00:19:17.592 "esnap_clone": false 00:19:17.592 } 00:19:17.592 } 00:19:17.592 } 00:19:17.592 ]' 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:17.592 01:45:01 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 518fb87a-d211-4f5e-87f3-04cf32adb04e 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:17.853 { 00:19:17.853 "name": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.853 "aliases": [ 00:19:17.853 "lvs/nvme0n1p0" 00:19:17.853 ], 00:19:17.853 "product_name": "Logical Volume", 00:19:17.853 "block_size": 4096, 00:19:17.853 "num_blocks": 26476544, 00:19:17.853 "uuid": "518fb87a-d211-4f5e-87f3-04cf32adb04e", 00:19:17.853 "assigned_rate_limits": { 00:19:17.853 "rw_ios_per_sec": 0, 00:19:17.853 "rw_mbytes_per_sec": 0, 00:19:17.853 "r_mbytes_per_sec": 0, 00:19:17.853 "w_mbytes_per_sec": 0 00:19:17.853 }, 00:19:17.853 "claimed": false, 00:19:17.853 "zoned": false, 00:19:17.853 "supported_io_types": { 00:19:17.853 "read": true, 00:19:17.853 "write": true, 00:19:17.853 "unmap": true, 00:19:17.853 "flush": false, 00:19:17.853 "reset": true, 00:19:17.853 "nvme_admin": false, 00:19:17.853 "nvme_io": false, 00:19:17.853 "nvme_io_md": false, 00:19:17.853 "write_zeroes": true, 00:19:17.853 "zcopy": false, 00:19:17.853 "get_zone_info": false, 00:19:17.853 "zone_management": false, 00:19:17.853 "zone_append": false, 00:19:17.853 "compare": false, 00:19:17.853 "compare_and_write": false, 00:19:17.853 "abort": false, 00:19:17.853 "seek_hole": true, 00:19:17.853 "seek_data": true, 00:19:17.853 "copy": false, 00:19:17.853 "nvme_iov_md": false 00:19:17.853 }, 00:19:17.853 "driver_specific": { 00:19:17.853 "lvol": { 00:19:17.853 "lvol_store_uuid": "e6e903d9-e74e-41de-b71f-d1f675296a1e", 00:19:17.853 "base_bdev": "nvme0n1", 00:19:17.853 "thin_provision": true, 00:19:17.853 "num_allocated_clusters": 0, 00:19:17.853 "snapshot": false, 00:19:17.853 "clone": false, 00:19:17.853 "esnap_clone": false 00:19:17.853 } 00:19:17.853 } 00:19:17.853 } 00:19:17.853 ]' 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:17.853 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:18.115 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:18.115 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:18.115 01:45:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:18.115 01:45:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:18.115 01:45:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 518fb87a-d211-4f5e-87f3-04cf32adb04e -c nvc0n1p0 --l2p_dram_limit 20 00:19:18.115 [2024-11-21 01:45:02.000768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.115 [2024-11-21 01:45:02.000808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.115 [2024-11-21 01:45:02.000819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:18.115 [2024-11-21 01:45:02.000826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.115 [2024-11-21 01:45:02.000864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.115 [2024-11-21 01:45:02.000874] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.115 [2024-11-21 01:45:02.000881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:18.115 [2024-11-21 01:45:02.000888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.115 [2024-11-21 01:45:02.000900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.115 [2024-11-21 01:45:02.001481] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.115 [2024-11-21 01:45:02.001495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.115 [2024-11-21 01:45:02.001502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.115 [2024-11-21 01:45:02.001509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:19:18.115 [2024-11-21 01:45:02.001516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.115 [2024-11-21 01:45:02.001537] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8287017d-b0f5-48e5-bf5f-d69ad06d1c8f 00:19:18.116 [2024-11-21 01:45:02.002493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.002596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:18.116 [2024-11-21 01:45:02.002620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:18.116 [2024-11-21 01:45:02.002630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.007489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.007515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.116 [2024-11-21 01:45:02.007524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.802 ms 00:19:18.116 [2024-11-21 01:45:02.007530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.007598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.007606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.116 [2024-11-21 01:45:02.007625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:18.116 [2024-11-21 01:45:02.007632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.007666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.007674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.116 [2024-11-21 01:45:02.007682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:18.116 [2024-11-21 01:45:02.007687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.007704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:18.116 [2024-11-21 01:45:02.010568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.010595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.116 [2024-11-21 01:45:02.010603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:19:18.116 [2024-11-21 01:45:02.010616] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.010641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.010648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.116 [2024-11-21 01:45:02.010655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:18.116 [2024-11-21 01:45:02.010662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.010679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:18.116 [2024-11-21 01:45:02.010788] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:18.116 [2024-11-21 01:45:02.010797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.116 [2024-11-21 01:45:02.010807] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:18.116 [2024-11-21 01:45:02.010815] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.116 [2024-11-21 01:45:02.010823] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.116 [2024-11-21 01:45:02.010828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:18.116 [2024-11-21 01:45:02.010835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.116 [2024-11-21 01:45:02.010841] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:18.116 [2024-11-21 01:45:02.010848] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:18.116 [2024-11-21 01:45:02.010854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.010864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.116 [2024-11-21 01:45:02.010870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:19:18.116 [2024-11-21 01:45:02.010876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.010937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.116 [2024-11-21 01:45:02.010944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.116 [2024-11-21 01:45:02.010950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:18.116 [2024-11-21 01:45:02.010958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.116 [2024-11-21 01:45:02.011026] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.116 [2024-11-21 01:45:02.011034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.116 [2024-11-21 01:45:02.011042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.116 [2024-11-21 01:45:02.011061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:18.116 
[2024-11-21 01:45:02.011073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.116 [2024-11-21 01:45:02.011078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.116 [2024-11-21 01:45:02.011089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.116 [2024-11-21 01:45:02.011097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:18.116 [2024-11-21 01:45:02.011102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.116 [2024-11-21 01:45:02.011113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.116 [2024-11-21 01:45:02.011119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:18.116 [2024-11-21 01:45:02.011129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.116 [2024-11-21 01:45:02.011141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.116 [2024-11-21 01:45:02.011158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.116 [2024-11-21 01:45:02.011175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.116 [2024-11-21 01:45:02.011192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.116 [2024-11-21 01:45:02.011210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.116 [2024-11-21 01:45:02.011228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.116 [2024-11-21 01:45:02.011239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.116 [2024-11-21 01:45:02.011246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:18.116 [2024-11-21 01:45:02.011251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.116 [2024-11-21 01:45:02.011257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:18.116 [2024-11-21 01:45:02.011262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:18.116 [2024-11-21 01:45:02.011268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:18.116 [2024-11-21 01:45:02.011279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:18.116 [2024-11-21 01:45:02.011284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.116 [2024-11-21 01:45:02.011297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.116 [2024-11-21 01:45:02.011304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.116 [2024-11-21 01:45:02.011318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:18.116 [2024-11-21 01:45:02.011324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.116 [2024-11-21 01:45:02.011381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.116 [2024-11-21 01:45:02.011387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.116 [2024-11-21 01:45:02.011394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.116 [2024-11-21 01:45:02.011399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.116 [2024-11-21 01:45:02.011408] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.116 [2024-11-21 01:45:02.011415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.116 [2024-11-21 01:45:02.011423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:18.116 [2024-11-21 01:45:02.011428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:18.116 [2024-11-21 01:45:02.011435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:18.116 [2024-11-21 01:45:02.011440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:18.116 [2024-11-21 01:45:02.011447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:18.117 [2024-11-21 01:45:02.011452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:18.117 [2024-11-21 01:45:02.011459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:18.117 [2024-11-21 01:45:02.011464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:18.117 [2024-11-21 01:45:02.011472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:18.117 [2024-11-21 01:45:02.011477] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:18.117 [2024-11-21 01:45:02.011509] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.117 [2024-11-21 01:45:02.011515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.117 [2024-11-21 01:45:02.011528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.117 [2024-11-21 01:45:02.011535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.117 [2024-11-21 01:45:02.011541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.117 [2024-11-21 01:45:02.011548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.117 [2024-11-21 01:45:02.011555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.117 [2024-11-21 01:45:02.011562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:19:18.117 [2024-11-21 01:45:02.011567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.117 [2024-11-21 01:45:02.011605] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
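The xtrace above compresses the whole FTL bring-up into one stream. For reference, a minimal sketch of the same RPC sequence replayed by hand against the already-running target; the bdev names, PCI addresses, 5171 MiB cache split and 20 MiB L2P limit are the values this particular run used, while the lvstore UUID and lvol bdev UUID are run-specific and shown as placeholders.

  #!/usr/bin/env bash
  # Sketch only: mirrors the commands traced in this log, not a supported script.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0     # base device (QEMU NVMe, serial 12341)
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                             # prints the lvstore UUID
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>           # 103424 MiB thin lvol; prints the lvol bdev UUID
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0      # cache device
  $RPC bdev_split_create nvc0n1 -s 5171 1                               # nvc0n1p0 becomes the 5171 MiB NV cache
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-bdev-uuid> -c nvc0n1p0 --l2p_dram_limit 20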
00:19:18.117 [2024-11-21 01:45:02.011630] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:21.421 [2024-11-21 01:45:05.160243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.160307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:21.421 [2024-11-21 01:45:05.160328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3148.618 ms 00:19:21.421 [2024-11-21 01:45:05.160336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.186313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.186353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.421 [2024-11-21 01:45:05.186367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.786 ms 00:19:21.421 [2024-11-21 01:45:05.186375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.186492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.186502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:21.421 [2024-11-21 01:45:05.186514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:21.421 [2024-11-21 01:45:05.186522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.224958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.225001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.421 [2024-11-21 01:45:05.225017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.388 ms 00:19:21.421 [2024-11-21 01:45:05.225025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.225057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.225069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.421 [2024-11-21 01:45:05.225078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:21.421 [2024-11-21 01:45:05.225086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.225475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.225492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.421 [2024-11-21 01:45:05.225503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:21.421 [2024-11-21 01:45:05.225510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.225640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.225650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.421 [2024-11-21 01:45:05.225662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:21.421 [2024-11-21 01:45:05.225670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.238858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.238888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.421 [2024-11-21 
01:45:05.238900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.172 ms 00:19:21.421 [2024-11-21 01:45:05.238907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.250400] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:21.421 [2024-11-21 01:45:05.255873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.256019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:21.421 [2024-11-21 01:45:05.256035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.903 ms 00:19:21.421 [2024-11-21 01:45:05.256045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.335698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.335748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:21.421 [2024-11-21 01:45:05.335761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.632 ms 00:19:21.421 [2024-11-21 01:45:05.335771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.335948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.335963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:21.421 [2024-11-21 01:45:05.335972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:21.421 [2024-11-21 01:45:05.335981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.421 [2024-11-21 01:45:05.359687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.421 [2024-11-21 01:45:05.359729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:21.421 [2024-11-21 01:45:05.359740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.675 ms 00:19:21.421 [2024-11-21 01:45:05.359751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.382795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.382943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:21.681 [2024-11-21 01:45:05.382960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.010 ms 00:19:21.681 [2024-11-21 01:45:05.382969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.383530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.383549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:21.681 [2024-11-21 01:45:05.383558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:19:21.681 [2024-11-21 01:45:05.383567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.460635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.460682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:21.681 [2024-11-21 01:45:05.460695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.038 ms 00:19:21.681 [2024-11-21 01:45:05.460705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 
01:45:05.486950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.487002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:21.681 [2024-11-21 01:45:05.487015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:19:21.681 [2024-11-21 01:45:05.487028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.511225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.511260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:21.681 [2024-11-21 01:45:05.511270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.157 ms 00:19:21.681 [2024-11-21 01:45:05.511279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.535354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.535389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:21.681 [2024-11-21 01:45:05.535399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:19:21.681 [2024-11-21 01:45:05.535408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.535442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.535455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:21.681 [2024-11-21 01:45:05.535463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:21.681 [2024-11-21 01:45:05.535473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.535545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.681 [2024-11-21 01:45:05.535557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:21.681 [2024-11-21 01:45:05.535565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:21.681 [2024-11-21 01:45:05.535575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.681 [2024-11-21 01:45:05.536390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3535.208 ms, result 0 00:19:21.681 { 00:19:21.681 "name": "ftl0", 00:19:21.681 "uuid": "8287017d-b0f5-48e5-bf5f-d69ad06d1c8f" 00:19:21.681 } 00:19:21.681 01:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:21.681 01:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:21.681 01:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:21.941 01:45:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:21.941 [2024-11-21 01:45:05.860815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:21.941 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:21.941 Zero copy mechanism will not be used. 00:19:21.941 Running I/O for 4 seconds... 
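The three timed passes that follow are all driven through the bdevperf RPC helper shown above. As a rough illustration only (paths and parameters copied from this run, not a general recipe), the sequence boils down to:

    # confirm the FTL bdev is registered before measuring
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # pass 1: QD=1 random writes, 69632-byte (68 KiB) I/O units
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
    # pass 2: QD=128 random writes, 4 KiB I/O
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
    # pass 3: QD=128 verify, 4 KiB I/O
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096

The 69632-byte size in the first pass is above the 65536-byte zero-copy threshold reported by bdevperf, which is why the log notes that the zero-copy mechanism will not be used.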
00:19:24.266 1131.00 IOPS, 75.11 MiB/s [2024-11-21T01:45:09.164Z] 1130.50 IOPS, 75.07 MiB/s [2024-11-21T01:45:10.105Z] 1184.67 IOPS, 78.67 MiB/s [2024-11-21T01:45:10.105Z] 1186.75 IOPS, 78.81 MiB/s 00:19:26.148 Latency(us) 00:19:26.148 [2024-11-21T01:45:10.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.148 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:26.148 ftl0 : 4.00 1186.19 78.77 0.00 0.00 888.40 159.90 3327.21 00:19:26.148 [2024-11-21T01:45:10.105Z] =================================================================================================================== 00:19:26.148 [2024-11-21T01:45:10.105Z] Total : 1186.19 78.77 0.00 0.00 888.40 159.90 3327.21 00:19:26.148 { 00:19:26.148 "results": [ 00:19:26.148 { 00:19:26.148 "job": "ftl0", 00:19:26.148 "core_mask": "0x1", 00:19:26.148 "workload": "randwrite", 00:19:26.148 "status": "finished", 00:19:26.148 "queue_depth": 1, 00:19:26.148 "io_size": 69632, 00:19:26.148 "runtime": 4.002744, 00:19:26.148 "iops": 1186.1862762145167, 00:19:26.148 "mibps": 78.77018240487025, 00:19:26.148 "io_failed": 0, 00:19:26.148 "io_timeout": 0, 00:19:26.148 "avg_latency_us": 888.4013738578187, 00:19:26.148 "min_latency_us": 159.90153846153845, 00:19:26.148 "max_latency_us": 3327.2123076923076 00:19:26.148 } 00:19:26.148 ], 00:19:26.148 "core_count": 1 00:19:26.148 } 00:19:26.148 [2024-11-21 01:45:09.872178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:26.148 01:45:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:26.148 [2024-11-21 01:45:09.988426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:26.148 Running I/O for 4 seconds... 
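Each pass reports its summary twice: once as the human-readable Latency(us) table and once as the JSON block above. Assuming the JSON were captured to a file (results.json is only a placeholder name for this sketch), the headline figures could be pulled out with jq:

    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json

The field names (iops, mibps, avg_latency_us) are the ones visible in the JSON emitted by the run above.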
00:19:28.477 6737.00 IOPS, 26.32 MiB/s [2024-11-21T01:45:13.009Z] 5934.50 IOPS, 23.18 MiB/s [2024-11-21T01:45:14.398Z] 5653.00 IOPS, 22.08 MiB/s [2024-11-21T01:45:14.398Z] 5482.25 IOPS, 21.42 MiB/s 00:19:30.441 Latency(us) 00:19:30.441 [2024-11-21T01:45:14.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.441 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:30.441 ftl0 : 4.03 5469.89 21.37 0.00 0.00 23306.87 281.99 43959.53 00:19:30.441 [2024-11-21T01:45:14.398Z] =================================================================================================================== 00:19:30.441 [2024-11-21T01:45:14.398Z] Total : 5469.89 21.37 0.00 0.00 23306.87 0.00 43959.53 00:19:30.441 [2024-11-21 01:45:14.030634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:30.441 { 00:19:30.441 "results": [ 00:19:30.441 { 00:19:30.441 "job": "ftl0", 00:19:30.441 "core_mask": "0x1", 00:19:30.441 "workload": "randwrite", 00:19:30.441 "status": "finished", 00:19:30.441 "queue_depth": 128, 00:19:30.441 "io_size": 4096, 00:19:30.441 "runtime": 4.032443, 00:19:30.441 "iops": 5469.885129188435, 00:19:30.441 "mibps": 21.366738785892323, 00:19:30.441 "io_failed": 0, 00:19:30.441 "io_timeout": 0, 00:19:30.441 "avg_latency_us": 23306.865489902037, 00:19:30.441 "min_latency_us": 281.99384615384616, 00:19:30.441 "max_latency_us": 43959.53230769231 00:19:30.441 } 00:19:30.441 ], 00:19:30.441 "core_count": 1 00:19:30.441 } 00:19:30.441 01:45:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:30.441 [2024-11-21 01:45:14.147118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:30.441 Running I/O for 4 seconds... 
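The MiB/s column in these tables is simply IOPS multiplied by the I/O size. Reproducing two of the figures above as a quick sanity check (numbers copied from this log):

    awk 'BEGIN { printf "%.2f MiB/s\n", 1186.19 * 69632 / 1048576 }'   # QD=1, 69632-byte I/O -> ~78.77
    awk 'BEGIN { printf "%.2f MiB/s\n", 5469.89 * 4096 / 1048576 }'    # QD=128, 4 KiB I/O   -> ~21.37

Both match the 78.77 and 21.37 MiB/s totals reported by bdevperf for the first two passes.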
00:19:32.391 4554.00 IOPS, 17.79 MiB/s [2024-11-21T01:45:17.316Z] 4550.50 IOPS, 17.78 MiB/s [2024-11-21T01:45:18.270Z] 4937.33 IOPS, 19.29 MiB/s [2024-11-21T01:45:18.270Z] 4866.25 IOPS, 19.01 MiB/s 00:19:34.313 Latency(us) 00:19:34.313 [2024-11-21T01:45:18.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.313 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:34.313 Verification LBA range: start 0x0 length 0x1400000 00:19:34.313 ftl0 : 4.02 4877.28 19.05 0.00 0.00 26161.23 231.58 47589.22 00:19:34.313 [2024-11-21T01:45:18.270Z] =================================================================================================================== 00:19:34.313 [2024-11-21T01:45:18.270Z] Total : 4877.28 19.05 0.00 0.00 26161.23 0.00 47589.22 00:19:34.313 [2024-11-21 01:45:18.181297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft{ 00:19:34.313 "results": [ 00:19:34.313 { 00:19:34.313 "job": "ftl0", 00:19:34.313 "core_mask": "0x1", 00:19:34.313 "workload": "verify", 00:19:34.313 "status": "finished", 00:19:34.313 "verify_range": { 00:19:34.313 "start": 0, 00:19:34.313 "length": 20971520 00:19:34.313 }, 00:19:34.313 "queue_depth": 128, 00:19:34.313 "io_size": 4096, 00:19:34.313 "runtime": 4.017202, 00:19:34.313 "iops": 4877.275277668387, 00:19:34.313 "mibps": 19.05185655339214, 00:19:34.313 "io_failed": 0, 00:19:34.313 "io_timeout": 0, 00:19:34.313 "avg_latency_us": 26161.234608592553, 00:19:34.313 "min_latency_us": 231.58153846153846, 00:19:34.313 "max_latency_us": 47589.21846153846 00:19:34.313 } 00:19:34.313 ], 00:19:34.313 "core_count": 1 00:19:34.313 } 00:19:34.313 l0 00:19:34.313 01:45:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:34.574 [2024-11-21 01:45:18.396764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.574 [2024-11-21 01:45:18.396827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.574 [2024-11-21 01:45:18.396845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.574 [2024-11-21 01:45:18.396857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.574 [2024-11-21 01:45:18.396880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.574 [2024-11-21 01:45:18.399964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.574 [2024-11-21 01:45:18.400155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.574 [2024-11-21 01:45:18.400184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.062 ms 00:19:34.574 [2024-11-21 01:45:18.400193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.574 [2024-11-21 01:45:18.403081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.574 [2024-11-21 01:45:18.403127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.574 [2024-11-21 01:45:18.403146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:19:34.574 [2024-11-21 01:45:18.403162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.622999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.623204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:19:34.838 [2024-11-21 01:45:18.623285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 219.804 ms 00:19:34.838 [2024-11-21 01:45:18.623312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.629582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.629769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.838 [2024-11-21 01:45:18.629962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:19:34.838 [2024-11-21 01:45:18.629984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.656676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.656935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.838 [2024-11-21 01:45:18.656963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.617 ms 00:19:34.838 [2024-11-21 01:45:18.656973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.674445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.674495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.838 [2024-11-21 01:45:18.674517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.394 ms 00:19:34.838 [2024-11-21 01:45:18.674525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.674711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.674725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.838 [2024-11-21 01:45:18.674741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:34.838 [2024-11-21 01:45:18.674750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.700538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.700745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.838 [2024-11-21 01:45:18.700773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:19:34.838 [2024-11-21 01:45:18.700781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.725869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.725916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.838 [2024-11-21 01:45:18.725931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.041 ms 00:19:34.838 [2024-11-21 01:45:18.725938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.750984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.751029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.838 [2024-11-21 01:45:18.751044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.993 ms 00:19:34.838 [2024-11-21 01:45:18.751051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.775909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.838 [2024-11-21 01:45:18.776092] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.838 [2024-11-21 01:45:18.776120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.764 ms 00:19:34.838 [2024-11-21 01:45:18.776127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.838 [2024-11-21 01:45:18.776511] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.838 [2024-11-21 01:45:18.776585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.776988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:34.838 [2024-11-21 01:45:18.777106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.838 [2024-11-21 01:45:18.777535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.777977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.778993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779099] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.839 [2024-11-21 01:45:18.779229] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.839 [2024-11-21 01:45:18.779258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8287017d-b0f5-48e5-bf5f-d69ad06d1c8f 00:19:34.839 [2024-11-21 01:45:18.779283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.839 [2024-11-21 01:45:18.779309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.839 [2024-11-21 01:45:18.779337] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.839 [2024-11-21 01:45:18.779365] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.839 [2024-11-21 01:45:18.779386] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.839 [2024-11-21 01:45:18.779415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.839 [2024-11-21 01:45:18.779437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.839 [2024-11-21 01:45:18.779465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.839 [2024-11-21 01:45:18.779485] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.839 [2024-11-21 01:45:18.779514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.839 [2024-11-21 01:45:18.779540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.839 [2024-11-21 01:45:18.779572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.020 ms 00:19:34.839 [2024-11-21 01:45:18.779596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.799838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.102 [2024-11-21 01:45:18.799885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:35.102 [2024-11-21 01:45:18.799900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.415 ms 00:19:35.102 [2024-11-21 01:45:18.799909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.800301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.102 [2024-11-21 01:45:18.800311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:35.102 [2024-11-21 01:45:18.800322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:35.102 [2024-11-21 01:45:18.800330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.838994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.839173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.102 [2024-11-21 01:45:18.839202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.839210] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.839283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.839291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.102 [2024-11-21 01:45:18.839301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.839309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.839407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.839421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.102 [2024-11-21 01:45:18.839432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.839440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.839458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.839466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.102 [2024-11-21 01:45:18.839476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.839484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.925217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.925280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.102 [2024-11-21 01:45:18.925298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.925307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.995442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.995495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.102 [2024-11-21 01:45:18.995511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.995520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.995603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.995634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.102 [2024-11-21 01:45:18.995649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.995658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.995727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.995737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.102 [2024-11-21 01:45:18.995748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.995780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.995881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.995892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.102 [2024-11-21 01:45:18.995907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:35.102 [2024-11-21 01:45:18.995915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.995954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.995964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:35.102 [2024-11-21 01:45:18.995974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.995982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.996024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.996033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.102 [2024-11-21 01:45:18.996044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.996055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.996108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:35.102 [2024-11-21 01:45:18.996125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.102 [2024-11-21 01:45:18.996136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:35.102 [2024-11-21 01:45:18.996144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.102 [2024-11-21 01:45:18.996291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 599.473 ms, result 0 00:19:35.102 true 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75840 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75840 ']' 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75840 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.102 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75840 00:19:35.364 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.364 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.364 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75840' 00:19:35.364 killing process with pid 75840 00:19:35.364 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75840 00:19:35.364 Received shutdown signal, test time was about 4.000000 seconds 00:19:35.364 00:19:35.364 Latency(us) 00:19:35.364 [2024-11-21T01:45:19.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.364 [2024-11-21T01:45:19.321Z] =================================================================================================================== 00:19:35.364 [2024-11-21T01:45:19.321Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.364 01:45:19 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75840 00:19:40.657 Remove shared memory files 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:40.657 01:45:24 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:40.657 ************************************ 00:19:40.657 END TEST ftl_bdevperf 00:19:40.657 ************************************ 00:19:40.657 00:19:40.657 real 0m26.389s 00:19:40.657 user 0m28.909s 00:19:40.657 sys 0m0.947s 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:40.657 01:45:24 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:40.657 01:45:24 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:40.657 01:45:24 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:40.657 01:45:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:40.657 01:45:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:40.657 ************************************ 00:19:40.657 START TEST ftl_trim 00:19:40.657 ************************************ 00:19:40.657 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:40.919 * Looking for test storage... 00:19:40.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.919 01:45:24 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.919 --rc genhtml_branch_coverage=1 00:19:40.919 --rc genhtml_function_coverage=1 00:19:40.919 --rc genhtml_legend=1 00:19:40.919 --rc geninfo_all_blocks=1 00:19:40.919 --rc geninfo_unexecuted_blocks=1 00:19:40.919 00:19:40.919 ' 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.919 --rc genhtml_branch_coverage=1 00:19:40.919 --rc genhtml_function_coverage=1 00:19:40.919 --rc genhtml_legend=1 00:19:40.919 --rc geninfo_all_blocks=1 00:19:40.919 --rc geninfo_unexecuted_blocks=1 00:19:40.919 00:19:40.919 ' 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.919 --rc genhtml_branch_coverage=1 00:19:40.919 --rc genhtml_function_coverage=1 00:19:40.919 --rc genhtml_legend=1 00:19:40.919 --rc geninfo_all_blocks=1 00:19:40.919 --rc geninfo_unexecuted_blocks=1 00:19:40.919 00:19:40.919 ' 00:19:40.919 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:40.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.919 --rc genhtml_branch_coverage=1 00:19:40.919 --rc genhtml_function_coverage=1 00:19:40.919 --rc genhtml_legend=1 00:19:40.919 --rc geninfo_all_blocks=1 00:19:40.919 --rc geninfo_unexecuted_blocks=1 00:19:40.919 00:19:40.919 ' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:40.919 01:45:24 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:40.920 01:45:24 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76182 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76182 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76182 ']' 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.920 01:45:24 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.920 01:45:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:40.920 [2024-11-21 01:45:24.843932] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:19:40.920 [2024-11-21 01:45:24.844802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76182 ] 00:19:41.181 [2024-11-21 01:45:25.017308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.443 [2024-11-21 01:45:25.140747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.443 [2024-11-21 01:45:25.141193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.443 [2024-11-21 01:45:25.141113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.016 01:45:25 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.016 01:45:25 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:42.016 01:45:25 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:42.277 01:45:26 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:42.277 01:45:26 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:42.277 01:45:26 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:42.277 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:42.277 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:42.277 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:42.277 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:42.277 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:42.539 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:42.539 { 00:19:42.539 "name": "nvme0n1", 00:19:42.539 "aliases": [ 
00:19:42.539 "d96a219f-2870-47a2-8f8d-8546edf36119" 00:19:42.539 ], 00:19:42.539 "product_name": "NVMe disk", 00:19:42.539 "block_size": 4096, 00:19:42.539 "num_blocks": 1310720, 00:19:42.539 "uuid": "d96a219f-2870-47a2-8f8d-8546edf36119", 00:19:42.539 "numa_id": -1, 00:19:42.539 "assigned_rate_limits": { 00:19:42.539 "rw_ios_per_sec": 0, 00:19:42.539 "rw_mbytes_per_sec": 0, 00:19:42.539 "r_mbytes_per_sec": 0, 00:19:42.539 "w_mbytes_per_sec": 0 00:19:42.539 }, 00:19:42.539 "claimed": true, 00:19:42.539 "claim_type": "read_many_write_one", 00:19:42.539 "zoned": false, 00:19:42.539 "supported_io_types": { 00:19:42.539 "read": true, 00:19:42.539 "write": true, 00:19:42.539 "unmap": true, 00:19:42.539 "flush": true, 00:19:42.539 "reset": true, 00:19:42.539 "nvme_admin": true, 00:19:42.539 "nvme_io": true, 00:19:42.539 "nvme_io_md": false, 00:19:42.539 "write_zeroes": true, 00:19:42.539 "zcopy": false, 00:19:42.539 "get_zone_info": false, 00:19:42.539 "zone_management": false, 00:19:42.539 "zone_append": false, 00:19:42.539 "compare": true, 00:19:42.539 "compare_and_write": false, 00:19:42.539 "abort": true, 00:19:42.539 "seek_hole": false, 00:19:42.539 "seek_data": false, 00:19:42.539 "copy": true, 00:19:42.539 "nvme_iov_md": false 00:19:42.539 }, 00:19:42.539 "driver_specific": { 00:19:42.539 "nvme": [ 00:19:42.539 { 00:19:42.539 "pci_address": "0000:00:11.0", 00:19:42.539 "trid": { 00:19:42.539 "trtype": "PCIe", 00:19:42.539 "traddr": "0000:00:11.0" 00:19:42.539 }, 00:19:42.539 "ctrlr_data": { 00:19:42.539 "cntlid": 0, 00:19:42.539 "vendor_id": "0x1b36", 00:19:42.539 "model_number": "QEMU NVMe Ctrl", 00:19:42.539 "serial_number": "12341", 00:19:42.539 "firmware_revision": "8.0.0", 00:19:42.539 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:42.539 "oacs": { 00:19:42.539 "security": 0, 00:19:42.539 "format": 1, 00:19:42.539 "firmware": 0, 00:19:42.539 "ns_manage": 1 00:19:42.539 }, 00:19:42.539 "multi_ctrlr": false, 00:19:42.539 "ana_reporting": false 00:19:42.539 }, 00:19:42.540 "vs": { 00:19:42.540 "nvme_version": "1.4" 00:19:42.540 }, 00:19:42.540 "ns_data": { 00:19:42.540 "id": 1, 00:19:42.540 "can_share": false 00:19:42.540 } 00:19:42.540 } 00:19:42.540 ], 00:19:42.540 "mp_policy": "active_passive" 00:19:42.540 } 00:19:42.540 } 00:19:42.540 ]' 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:42.540 01:45:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:42.540 01:45:26 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:42.540 01:45:26 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:42.540 01:45:26 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:42.540 01:45:26 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:42.540 01:45:26 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:42.801 01:45:26 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=e6e903d9-e74e-41de-b71f-d1f675296a1e 00:19:42.801 01:45:26 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:42.801 01:45:26 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e6e903d9-e74e-41de-b71f-d1f675296a1e 00:19:43.062 01:45:26 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:43.323 01:45:27 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=529f01b7-ed5c-4cb7-a021-1818560a41dd 00:19:43.323 01:45:27 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 529f01b7-ed5c-4cb7-a021-1818560a41dd 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:43.583 01:45:27 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:43.583 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:43.583 { 00:19:43.583 "name": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:43.583 "aliases": [ 00:19:43.583 "lvs/nvme0n1p0" 00:19:43.583 ], 00:19:43.583 "product_name": "Logical Volume", 00:19:43.583 "block_size": 4096, 00:19:43.583 "num_blocks": 26476544, 00:19:43.583 "uuid": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:43.583 "assigned_rate_limits": { 00:19:43.583 "rw_ios_per_sec": 0, 00:19:43.583 "rw_mbytes_per_sec": 0, 00:19:43.583 "r_mbytes_per_sec": 0, 00:19:43.583 "w_mbytes_per_sec": 0 00:19:43.583 }, 00:19:43.583 "claimed": false, 00:19:43.583 "zoned": false, 00:19:43.583 "supported_io_types": { 00:19:43.583 "read": true, 00:19:43.583 "write": true, 00:19:43.583 "unmap": true, 00:19:43.583 "flush": false, 00:19:43.583 "reset": true, 00:19:43.583 "nvme_admin": false, 00:19:43.583 "nvme_io": false, 00:19:43.583 "nvme_io_md": false, 00:19:43.583 "write_zeroes": true, 00:19:43.583 "zcopy": false, 00:19:43.583 "get_zone_info": false, 00:19:43.583 "zone_management": false, 00:19:43.583 "zone_append": false, 00:19:43.583 "compare": false, 00:19:43.583 "compare_and_write": false, 00:19:43.583 "abort": false, 00:19:43.583 "seek_hole": true, 00:19:43.583 "seek_data": true, 00:19:43.583 "copy": false, 00:19:43.583 "nvme_iov_md": false 00:19:43.583 }, 00:19:43.583 "driver_specific": { 00:19:43.583 "lvol": { 00:19:43.583 "lvol_store_uuid": "529f01b7-ed5c-4cb7-a021-1818560a41dd", 00:19:43.583 "base_bdev": "nvme0n1", 00:19:43.583 "thin_provision": true, 00:19:43.583 "num_allocated_clusters": 0, 00:19:43.583 "snapshot": false, 00:19:43.583 "clone": false, 00:19:43.583 "esnap_clone": false 00:19:43.583 } 00:19:43.583 } 00:19:43.583 } 00:19:43.583 ]' 00:19:43.583 01:45:27 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:43.843 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:43.843 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:43.843 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:43.843 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:43.843 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:43.843 01:45:27 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:43.843 01:45:27 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:43.843 01:45:27 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:44.104 01:45:27 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:44.104 01:45:27 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:44.104 01:45:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.104 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.104 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.104 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:44.104 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:44.104 01:45:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.104 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.104 { 00:19:44.104 "name": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:44.104 "aliases": [ 00:19:44.104 "lvs/nvme0n1p0" 00:19:44.104 ], 00:19:44.104 "product_name": "Logical Volume", 00:19:44.104 "block_size": 4096, 00:19:44.104 "num_blocks": 26476544, 00:19:44.104 "uuid": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:44.104 "assigned_rate_limits": { 00:19:44.104 "rw_ios_per_sec": 0, 00:19:44.104 "rw_mbytes_per_sec": 0, 00:19:44.104 "r_mbytes_per_sec": 0, 00:19:44.104 "w_mbytes_per_sec": 0 00:19:44.104 }, 00:19:44.104 "claimed": false, 00:19:44.104 "zoned": false, 00:19:44.104 "supported_io_types": { 00:19:44.104 "read": true, 00:19:44.104 "write": true, 00:19:44.104 "unmap": true, 00:19:44.104 "flush": false, 00:19:44.104 "reset": true, 00:19:44.104 "nvme_admin": false, 00:19:44.104 "nvme_io": false, 00:19:44.104 "nvme_io_md": false, 00:19:44.104 "write_zeroes": true, 00:19:44.104 "zcopy": false, 00:19:44.104 "get_zone_info": false, 00:19:44.104 "zone_management": false, 00:19:44.104 "zone_append": false, 00:19:44.104 "compare": false, 00:19:44.104 "compare_and_write": false, 00:19:44.104 "abort": false, 00:19:44.104 "seek_hole": true, 00:19:44.104 "seek_data": true, 00:19:44.104 "copy": false, 00:19:44.104 "nvme_iov_md": false 00:19:44.104 }, 00:19:44.104 "driver_specific": { 00:19:44.104 "lvol": { 00:19:44.104 "lvol_store_uuid": "529f01b7-ed5c-4cb7-a021-1818560a41dd", 00:19:44.104 "base_bdev": "nvme0n1", 00:19:44.104 "thin_provision": true, 00:19:44.104 "num_allocated_clusters": 0, 00:19:44.104 "snapshot": false, 00:19:44.104 "clone": false, 00:19:44.104 "esnap_clone": false 00:19:44.104 } 00:19:44.104 } 00:19:44.104 } 00:19:44.104 ]' 00:19:44.104 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.365 01:45:28 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:44.365 01:45:28 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:44.365 01:45:28 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:44.365 01:45:28 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:44.365 01:45:28 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:44.365 01:45:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:44.365 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2ce0c1e5-8e53-4412-890b-eb1215bf91ce 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:44.625 { 00:19:44.625 "name": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:44.625 "aliases": [ 00:19:44.625 "lvs/nvme0n1p0" 00:19:44.625 ], 00:19:44.625 "product_name": "Logical Volume", 00:19:44.625 "block_size": 4096, 00:19:44.625 "num_blocks": 26476544, 00:19:44.625 "uuid": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:44.625 "assigned_rate_limits": { 00:19:44.625 "rw_ios_per_sec": 0, 00:19:44.625 "rw_mbytes_per_sec": 0, 00:19:44.625 "r_mbytes_per_sec": 0, 00:19:44.625 "w_mbytes_per_sec": 0 00:19:44.625 }, 00:19:44.625 "claimed": false, 00:19:44.625 "zoned": false, 00:19:44.625 "supported_io_types": { 00:19:44.625 "read": true, 00:19:44.625 "write": true, 00:19:44.625 "unmap": true, 00:19:44.625 "flush": false, 00:19:44.625 "reset": true, 00:19:44.625 "nvme_admin": false, 00:19:44.625 "nvme_io": false, 00:19:44.625 "nvme_io_md": false, 00:19:44.625 "write_zeroes": true, 00:19:44.625 "zcopy": false, 00:19:44.625 "get_zone_info": false, 00:19:44.625 "zone_management": false, 00:19:44.625 "zone_append": false, 00:19:44.625 "compare": false, 00:19:44.625 "compare_and_write": false, 00:19:44.625 "abort": false, 00:19:44.625 "seek_hole": true, 00:19:44.625 "seek_data": true, 00:19:44.625 "copy": false, 00:19:44.625 "nvme_iov_md": false 00:19:44.625 }, 00:19:44.625 "driver_specific": { 00:19:44.625 "lvol": { 00:19:44.625 "lvol_store_uuid": "529f01b7-ed5c-4cb7-a021-1818560a41dd", 00:19:44.625 "base_bdev": "nvme0n1", 00:19:44.625 "thin_provision": true, 00:19:44.625 "num_allocated_clusters": 0, 00:19:44.625 "snapshot": false, 00:19:44.625 "clone": false, 00:19:44.625 "esnap_clone": false 00:19:44.625 } 00:19:44.625 } 00:19:44.625 } 00:19:44.625 ]' 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:44.625 01:45:28 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:44.625 01:45:28 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:44.625 01:45:28 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2ce0c1e5-8e53-4412-890b-eb1215bf91ce -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:44.887 [2024-11-21 01:45:28.747815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.887 [2024-11-21 01:45:28.747854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.887 [2024-11-21 01:45:28.747868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.887 [2024-11-21 01:45:28.747875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.887 [2024-11-21 01:45:28.750215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.887 [2024-11-21 01:45:28.750328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.887 [2024-11-21 01:45:28.750344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.310 ms 00:19:44.887 [2024-11-21 01:45:28.750352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.887 [2024-11-21 01:45:28.750433] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.887 [2024-11-21 01:45:28.750971] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.888 [2024-11-21 01:45:28.750992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.750998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.888 [2024-11-21 01:45:28.751007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:19:44.888 [2024-11-21 01:45:28.751014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.751110] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 45bfb092-704a-4e18-9717-d1701cdaabcd 00:19:44.888 [2024-11-21 01:45:28.752339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.752369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:44.888 [2024-11-21 01:45:28.752378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:44.888 [2024-11-21 01:45:28.752386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.759046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.759073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.888 [2024-11-21 01:45:28.759082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.584 ms 00:19:44.888 [2024-11-21 01:45:28.759092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.759191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.759202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.888 [2024-11-21 01:45:28.759208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.054 ms 00:19:44.888 [2024-11-21 01:45:28.759218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.759250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.759260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.888 [2024-11-21 01:45:28.759266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.888 [2024-11-21 01:45:28.759273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.759307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:44.888 [2024-11-21 01:45:28.762511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.762534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.888 [2024-11-21 01:45:28.762546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.205 ms 00:19:44.888 [2024-11-21 01:45:28.762553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.762597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.762604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.888 [2024-11-21 01:45:28.762628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.888 [2024-11-21 01:45:28.762647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.762679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:44.888 [2024-11-21 01:45:28.762789] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.888 [2024-11-21 01:45:28.762804] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.888 [2024-11-21 01:45:28.762813] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.888 [2024-11-21 01:45:28.762823] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.888 [2024-11-21 01:45:28.762831] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.888 [2024-11-21 01:45:28.762839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:44.888 [2024-11-21 01:45:28.762845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.888 [2024-11-21 01:45:28.762852] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.888 [2024-11-21 01:45:28.762859] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.888 [2024-11-21 01:45:28.762866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 [2024-11-21 01:45:28.762872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.888 [2024-11-21 01:45:28.762881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:44.888 [2024-11-21 01:45:28.762886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.762965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.888 
[2024-11-21 01:45:28.762972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.888 [2024-11-21 01:45:28.762981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:44.888 [2024-11-21 01:45:28.762987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.888 [2024-11-21 01:45:28.763086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.888 [2024-11-21 01:45:28.763094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.888 [2024-11-21 01:45:28.763102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.888 [2024-11-21 01:45:28.763121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.888 [2024-11-21 01:45:28.763140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.888 [2024-11-21 01:45:28.763151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.888 [2024-11-21 01:45:28.763156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:44.888 [2024-11-21 01:45:28.763163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.888 [2024-11-21 01:45:28.763169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.888 [2024-11-21 01:45:28.763175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:44.888 [2024-11-21 01:45:28.763180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.888 [2024-11-21 01:45:28.763194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.888 [2024-11-21 01:45:28.763213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.888 [2024-11-21 01:45:28.763230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.888 [2024-11-21 01:45:28.763250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:44.888 [2024-11-21 01:45:28.763267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.888 [2024-11-21 01:45:28.763288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.888 [2024-11-21 01:45:28.763299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.888 [2024-11-21 01:45:28.763304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:44.888 [2024-11-21 01:45:28.763310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.888 [2024-11-21 01:45:28.763316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.888 [2024-11-21 01:45:28.763322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:44.888 [2024-11-21 01:45:28.763327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.888 [2024-11-21 01:45:28.763339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:44.888 [2024-11-21 01:45:28.763344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763349] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.888 [2024-11-21 01:45:28.763357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.888 [2024-11-21 01:45:28.763363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.888 [2024-11-21 01:45:28.763375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.888 [2024-11-21 01:45:28.763384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.888 [2024-11-21 01:45:28.763389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.888 [2024-11-21 01:45:28.763396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.888 [2024-11-21 01:45:28.763401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.888 [2024-11-21 01:45:28.763407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.888 [2024-11-21 01:45:28.763416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.888 [2024-11-21 01:45:28.763424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.888 [2024-11-21 01:45:28.763430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:44.889 [2024-11-21 01:45:28.763437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:44.889 [2024-11-21 01:45:28.763444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:44.889 [2024-11-21 01:45:28.763453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:44.889 [2024-11-21 01:45:28.763458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:44.889 [2024-11-21 01:45:28.763465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:44.889 [2024-11-21 01:45:28.763470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:44.889 [2024-11-21 01:45:28.763477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:44.889 [2024-11-21 01:45:28.763483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:44.889 [2024-11-21 01:45:28.763491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:44.889 [2024-11-21 01:45:28.763521] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.889 [2024-11-21 01:45:28.763533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.889 [2024-11-21 01:45:28.763546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.889 [2024-11-21 01:45:28.763552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.889 [2024-11-21 01:45:28.763559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.889 [2024-11-21 01:45:28.763566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.889 [2024-11-21 01:45:28.763573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.889 [2024-11-21 01:45:28.763579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:19:44.889 [2024-11-21 01:45:28.763586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.889 [2024-11-21 01:45:28.763681] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:44.889 [2024-11-21 01:45:28.763695] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:47.429 [2024-11-21 01:45:31.348494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.429 [2024-11-21 01:45:31.348565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:47.429 [2024-11-21 01:45:31.348581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2584.802 ms 00:19:47.429 [2024-11-21 01:45:31.348592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.429 [2024-11-21 01:45:31.376956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.429 [2024-11-21 01:45:31.377004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.429 [2024-11-21 01:45:31.377017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.086 ms 00:19:47.429 [2024-11-21 01:45:31.377027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.429 [2024-11-21 01:45:31.377166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.429 [2024-11-21 01:45:31.377179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:47.429 [2024-11-21 01:45:31.377188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:47.429 [2024-11-21 01:45:31.377200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.426817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.426877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.758 [2024-11-21 01:45:31.426893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.563 ms 00:19:47.758 [2024-11-21 01:45:31.426905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.427048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.427065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.758 [2024-11-21 01:45:31.427077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.758 [2024-11-21 01:45:31.427088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.427533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.427567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.758 [2024-11-21 01:45:31.427579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:19:47.758 [2024-11-21 01:45:31.427590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.427757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.427843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.758 [2024-11-21 01:45:31.427857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:19:47.758 [2024-11-21 01:45:31.427872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.444182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.444215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:47.758 [2024-11-21 01:45:31.444225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.257 ms 00:19:47.758 [2024-11-21 01:45:31.444236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.456545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:47.758 [2024-11-21 01:45:31.473843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.473879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:47.758 [2024-11-21 01:45:31.473892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.485 ms 00:19:47.758 [2024-11-21 01:45:31.473900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.553626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.553673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:47.758 [2024-11-21 01:45:31.553688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.640 ms 00:19:47.758 [2024-11-21 01:45:31.553697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.553933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.553947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:47.758 [2024-11-21 01:45:31.553961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:47.758 [2024-11-21 01:45:31.553969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.577190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.577224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:47.758 [2024-11-21 01:45:31.577238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.188 ms 00:19:47.758 [2024-11-21 01:45:31.577247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.600021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.600050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:47.758 [2024-11-21 01:45:31.600064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.708 ms 00:19:47.758 [2024-11-21 01:45:31.600071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.600689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.600707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:47.758 [2024-11-21 01:45:31.600718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:47.758 [2024-11-21 01:45:31.600725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.758 [2024-11-21 01:45:31.673648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.673680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:47.758 [2024-11-21 01:45:31.673697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.883 ms 00:19:47.758 [2024-11-21 01:45:31.673705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:47.758 [2024-11-21 01:45:31.698234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.758 [2024-11-21 01:45:31.698269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:47.758 [2024-11-21 01:45:31.698283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.429 ms 00:19:47.758 [2024-11-21 01:45:31.698291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.020 [2024-11-21 01:45:31.721376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.020 [2024-11-21 01:45:31.721419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:48.020 [2024-11-21 01:45:31.721432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.019 ms 00:19:48.020 [2024-11-21 01:45:31.721439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.020 [2024-11-21 01:45:31.745345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.020 [2024-11-21 01:45:31.745478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:48.020 [2024-11-21 01:45:31.745498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.829 ms 00:19:48.020 [2024-11-21 01:45:31.745519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.020 [2024-11-21 01:45:31.745585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.020 [2024-11-21 01:45:31.745598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:48.020 [2024-11-21 01:45:31.745627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:48.020 [2024-11-21 01:45:31.745637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.020 [2024-11-21 01:45:31.745718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.020 [2024-11-21 01:45:31.745727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:48.020 [2024-11-21 01:45:31.745737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:48.020 [2024-11-21 01:45:31.745745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.020 [2024-11-21 01:45:31.746641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.020 [2024-11-21 01:45:31.749659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2998.492 ms, result 0 00:19:48.020 [2024-11-21 01:45:31.750919] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.020 { 00:19:48.020 "name": "ftl0", 00:19:48.020 "uuid": "45bfb092-704a-4e18-9717-d1701cdaabcd" 00:19:48.020 } 00:19:48.020 01:45:31 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:48.020 01:45:31 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:48.020 01:45:31 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:48.280 [ 00:19:48.280 { 00:19:48.280 "name": "ftl0", 00:19:48.280 "aliases": [ 00:19:48.280 "45bfb092-704a-4e18-9717-d1701cdaabcd" 00:19:48.280 ], 00:19:48.280 "product_name": "FTL disk", 00:19:48.280 "block_size": 4096, 00:19:48.280 "num_blocks": 23592960, 00:19:48.280 "uuid": "45bfb092-704a-4e18-9717-d1701cdaabcd", 00:19:48.280 "assigned_rate_limits": { 00:19:48.280 "rw_ios_per_sec": 0, 00:19:48.280 "rw_mbytes_per_sec": 0, 00:19:48.280 "r_mbytes_per_sec": 0, 00:19:48.280 "w_mbytes_per_sec": 0 00:19:48.280 }, 00:19:48.280 "claimed": false, 00:19:48.280 "zoned": false, 00:19:48.280 "supported_io_types": { 00:19:48.280 "read": true, 00:19:48.280 "write": true, 00:19:48.280 "unmap": true, 00:19:48.280 "flush": true, 00:19:48.280 "reset": false, 00:19:48.280 "nvme_admin": false, 00:19:48.280 "nvme_io": false, 00:19:48.280 "nvme_io_md": false, 00:19:48.280 "write_zeroes": true, 00:19:48.280 "zcopy": false, 00:19:48.280 "get_zone_info": false, 00:19:48.280 "zone_management": false, 00:19:48.280 "zone_append": false, 00:19:48.280 "compare": false, 00:19:48.280 "compare_and_write": false, 00:19:48.280 "abort": false, 00:19:48.280 "seek_hole": false, 00:19:48.280 "seek_data": false, 00:19:48.280 "copy": false, 00:19:48.280 "nvme_iov_md": false 00:19:48.280 }, 00:19:48.280 "driver_specific": { 00:19:48.280 "ftl": { 00:19:48.280 "base_bdev": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 00:19:48.281 "cache": "nvc0n1p0" 00:19:48.281 } 00:19:48.281 } 00:19:48.281 } 00:19:48.281 ] 00:19:48.281 01:45:32 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:48.281 01:45:32 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:48.281 01:45:32 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:48.540 01:45:32 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:48.540 01:45:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:48.798 01:45:32 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:48.798 { 00:19:48.798 "name": "ftl0", 00:19:48.798 "aliases": [ 00:19:48.798 "45bfb092-704a-4e18-9717-d1701cdaabcd" 00:19:48.798 ], 00:19:48.798 "product_name": "FTL disk", 00:19:48.798 "block_size": 4096, 00:19:48.798 "num_blocks": 23592960, 00:19:48.798 "uuid": "45bfb092-704a-4e18-9717-d1701cdaabcd", 00:19:48.798 "assigned_rate_limits": { 00:19:48.798 "rw_ios_per_sec": 0, 00:19:48.798 "rw_mbytes_per_sec": 0, 00:19:48.798 "r_mbytes_per_sec": 0, 00:19:48.798 "w_mbytes_per_sec": 0 00:19:48.798 }, 00:19:48.798 "claimed": false, 00:19:48.798 "zoned": false, 00:19:48.798 "supported_io_types": { 00:19:48.798 "read": true, 00:19:48.798 "write": true, 00:19:48.798 "unmap": true, 00:19:48.798 "flush": true, 00:19:48.798 "reset": false, 00:19:48.798 "nvme_admin": false, 00:19:48.798 "nvme_io": false, 00:19:48.798 "nvme_io_md": false, 00:19:48.798 "write_zeroes": true, 00:19:48.798 "zcopy": false, 00:19:48.798 "get_zone_info": false, 00:19:48.798 "zone_management": false, 00:19:48.798 "zone_append": false, 00:19:48.798 "compare": false, 00:19:48.798 "compare_and_write": false, 00:19:48.798 "abort": false, 00:19:48.798 "seek_hole": false, 00:19:48.798 "seek_data": false, 00:19:48.798 "copy": false, 00:19:48.798 "nvme_iov_md": false 00:19:48.798 }, 00:19:48.798 "driver_specific": { 00:19:48.798 "ftl": { 00:19:48.798 "base_bdev": "2ce0c1e5-8e53-4412-890b-eb1215bf91ce", 
00:19:48.798 "cache": "nvc0n1p0" 00:19:48.798 } 00:19:48.798 } 00:19:48.798 } 00:19:48.798 ]' 00:19:48.798 01:45:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:48.798 01:45:32 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:48.798 01:45:32 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:49.058 [2024-11-21 01:45:32.762290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.762323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:49.058 [2024-11-21 01:45:32.762335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:49.058 [2024-11-21 01:45:32.762345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.762375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:49.058 [2024-11-21 01:45:32.764583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.764706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:49.058 [2024-11-21 01:45:32.764729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:19:49.058 [2024-11-21 01:45:32.764736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.765193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.765205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:49.058 [2024-11-21 01:45:32.765214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:49.058 [2024-11-21 01:45:32.765220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.767982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.768064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:49.058 [2024-11-21 01:45:32.768077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.732 ms 00:19:49.058 [2024-11-21 01:45:32.768085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.773482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.773505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:49.058 [2024-11-21 01:45:32.773515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.356 ms 00:19:49.058 [2024-11-21 01:45:32.773523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.791537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.791652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:49.058 [2024-11-21 01:45:32.791671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.934 ms 00:19:49.058 [2024-11-21 01:45:32.791676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.804003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.804031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:49.058 [2024-11-21 01:45:32.804043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.273 ms 00:19:49.058 [2024-11-21 01:45:32.804052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.804220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.804230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:49.058 [2024-11-21 01:45:32.804238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:49.058 [2024-11-21 01:45:32.804245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.822106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.822131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:49.058 [2024-11-21 01:45:32.822140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.836 ms 00:19:49.058 [2024-11-21 01:45:32.822146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.840039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.840063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:49.058 [2024-11-21 01:45:32.840075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.830 ms 00:19:49.058 [2024-11-21 01:45:32.840081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.857254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.857289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:49.058 [2024-11-21 01:45:32.857299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:19:49.058 [2024-11-21 01:45:32.857305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.874644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.058 [2024-11-21 01:45:32.874742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:49.058 [2024-11-21 01:45:32.874758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.253 ms 00:19:49.058 [2024-11-21 01:45:32.874763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.058 [2024-11-21 01:45:32.874813] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:49.058 [2024-11-21 01:45:32.874825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874878] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.874999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.875005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.875012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.875017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:49.058 [2024-11-21 01:45:32.875027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 
[2024-11-21 01:45:32.875065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:49.059 [2024-11-21 01:45:32.875231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:49.059 [2024-11-21 01:45:32.875524] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:49.059 [2024-11-21 01:45:32.875533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:19:49.059 [2024-11-21 01:45:32.875539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:49.059 [2024-11-21 01:45:32.875547] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:49.059 [2024-11-21 01:45:32.875552] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:49.059 [2024-11-21 01:45:32.875559] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:49.059 [2024-11-21 01:45:32.875566] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:49.059 [2024-11-21 01:45:32.875732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:49.059 [2024-11-21 01:45:32.875741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:49.059 [2024-11-21 01:45:32.875747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:49.059 [2024-11-21 01:45:32.875752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:49.059 [2024-11-21 01:45:32.875759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.059 [2024-11-21 01:45:32.875765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:49.059 [2024-11-21 01:45:32.875774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:19:49.059 [2024-11-21 01:45:32.875780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.059 [2024-11-21 01:45:32.885770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.059 [2024-11-21 01:45:32.885859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:49.059 [2024-11-21 01:45:32.885878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.962 ms 00:19:49.059 [2024-11-21 01:45:32.885884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.059 [2024-11-21 01:45:32.886205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.059 [2024-11-21 01:45:32.886214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:49.059 [2024-11-21 01:45:32.886223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:49.060 [2024-11-21 01:45:32.886229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.060 [2024-11-21 01:45:32.922542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.060 [2024-11-21 01:45:32.922571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.060 [2024-11-21 01:45:32.922582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.060 [2024-11-21 01:45:32.922589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.060 [2024-11-21 01:45:32.922689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.060 [2024-11-21 01:45:32.922697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.060 [2024-11-21 01:45:32.922706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.060 [2024-11-21 01:45:32.922712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.060 [2024-11-21 01:45:32.922764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.060 [2024-11-21 01:45:32.922772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.060 [2024-11-21 01:45:32.922783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.060 [2024-11-21 01:45:32.922789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.060 [2024-11-21 01:45:32.922817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.060 [2024-11-21 01:45:32.922824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.060 [2024-11-21 01:45:32.922831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.060 [2024-11-21 01:45:32.922837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.060 [2024-11-21 01:45:32.990146] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.060 [2024-11-21 01:45:32.990184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.060 [2024-11-21 01:45:32.990196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.060 [2024-11-21 01:45:32.990203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.319 [2024-11-21 01:45:33.041378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.319 [2024-11-21 01:45:33.041493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.319 [2024-11-21 01:45:33.041572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.319 [2024-11-21 01:45:33.041700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:49.319 [2024-11-21 01:45:33.041766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.319 [2024-11-21 01:45:33.041836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.319 [2024-11-21 01:45:33.041896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:49.319 [2024-11-21 01:45:33.041904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.319 [2024-11-21 01:45:33.041912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:49.319 [2024-11-21 01:45:33.041917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:49.319 [2024-11-21 01:45:33.042086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.773 ms, result 0 00:19:49.319 true 00:19:49.319 01:45:33 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76182 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76182 ']' 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76182 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76182 00:19:49.319 killing process with pid 76182 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76182' 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76182 00:19:49.319 01:45:33 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76182 00:19:55.903 01:45:39 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:56.847 65536+0 records in 00:19:56.847 65536+0 records out 00:19:56.847 268435456 bytes (268 MB, 256 MiB) copied, 1.10421 s, 243 MB/s 00:19:56.847 01:45:40 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:56.847 [2024-11-21 01:45:40.571375] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
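(Illustrative aside, not part of the build output.) The dd step above generates a 256 MiB random pattern (presumably the random_pattern file that spdk_dd then writes into the ftl0 bdev via the ftl.json config), and the byte count and throughput it reports work out as expected:

    # 65536 records x 4 KiB each
    echo $((65536 * 4096))                                           # 268435456 bytes = 256 MiB
    # decimal throughput implied by "copied, 1.10421 s"
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.10421 / 1e6 }'  # ~243 MB/s, as reported by dd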
00:19:56.847 [2024-11-21 01:45:40.571516] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76364 ] 00:19:56.847 [2024-11-21 01:45:40.735609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.108 [2024-11-21 01:45:40.855444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.369 [2024-11-21 01:45:41.143889] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.369 [2024-11-21 01:45:41.143969] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.369 [2024-11-21 01:45:41.305645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.369 [2024-11-21 01:45:41.305692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.369 [2024-11-21 01:45:41.305706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.369 [2024-11-21 01:45:41.305714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.369 [2024-11-21 01:45:41.308535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.369 [2024-11-21 01:45:41.308570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.369 [2024-11-21 01:45:41.308581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:19:57.369 [2024-11-21 01:45:41.308589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.369 [2024-11-21 01:45:41.308691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:57.369 [2024-11-21 01:45:41.309373] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.369 [2024-11-21 01:45:41.309400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.369 [2024-11-21 01:45:41.309409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.369 [2024-11-21 01:45:41.309418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:19:57.369 [2024-11-21 01:45:41.309425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.369 [2024-11-21 01:45:41.310932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.632 [2024-11-21 01:45:41.324064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.324222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.632 [2024-11-21 01:45:41.324241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.134 ms 00:19:57.632 [2024-11-21 01:45:41.324251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.324340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.324352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.632 [2024-11-21 01:45:41.324361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:57.632 [2024-11-21 01:45:41.324369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.330942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:57.632 [2024-11-21 01:45:41.331058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.632 [2024-11-21 01:45:41.331073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.530 ms 00:19:57.632 [2024-11-21 01:45:41.331081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.331171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.331182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.632 [2024-11-21 01:45:41.331191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:57.632 [2024-11-21 01:45:41.331199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.331223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.331236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.632 [2024-11-21 01:45:41.331244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.632 [2024-11-21 01:45:41.331252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.331274] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.632 [2024-11-21 01:45:41.334867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.334896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.632 [2024-11-21 01:45:41.334905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.600 ms 00:19:57.632 [2024-11-21 01:45:41.334913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.334961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.334971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.632 [2024-11-21 01:45:41.334980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:57.632 [2024-11-21 01:45:41.334987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.335005] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.632 [2024-11-21 01:45:41.335026] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:57.632 [2024-11-21 01:45:41.335062] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.632 [2024-11-21 01:45:41.335079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:57.632 [2024-11-21 01:45:41.335183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:57.632 [2024-11-21 01:45:41.335194] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.632 [2024-11-21 01:45:41.335205] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:57.632 [2024-11-21 01:45:41.335215] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335227] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335235] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.632 [2024-11-21 01:45:41.335242] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.632 [2024-11-21 01:45:41.335250] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:57.632 [2024-11-21 01:45:41.335257] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:57.632 [2024-11-21 01:45:41.335265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.335273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.632 [2024-11-21 01:45:41.335281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:57.632 [2024-11-21 01:45:41.335288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.335389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.632 [2024-11-21 01:45:41.335399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.632 [2024-11-21 01:45:41.335411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:57.632 [2024-11-21 01:45:41.335418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.632 [2024-11-21 01:45:41.335520] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.632 [2024-11-21 01:45:41.335530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.632 [2024-11-21 01:45:41.335538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.632 [2024-11-21 01:45:41.335561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.632 [2024-11-21 01:45:41.335583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.632 [2024-11-21 01:45:41.335597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.632 [2024-11-21 01:45:41.335604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.632 [2024-11-21 01:45:41.335627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.632 [2024-11-21 01:45:41.335641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.632 [2024-11-21 01:45:41.335649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:57.632 [2024-11-21 01:45:41.335656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:57.632 [2024-11-21 01:45:41.335670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335679] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.632 [2024-11-21 01:45:41.335694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.632 [2024-11-21 01:45:41.335714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.632 [2024-11-21 01:45:41.335736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.632 [2024-11-21 01:45:41.335758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.632 [2024-11-21 01:45:41.335771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:57.632 [2024-11-21 01:45:41.335777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:57.632 [2024-11-21 01:45:41.335784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.632 [2024-11-21 01:45:41.335792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.632 [2024-11-21 01:45:41.335798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:57.632 [2024-11-21 01:45:41.335805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.632 [2024-11-21 01:45:41.335813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:57.632 [2024-11-21 01:45:41.335820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:57.632 [2024-11-21 01:45:41.335827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.633 [2024-11-21 01:45:41.335834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:57.633 [2024-11-21 01:45:41.335841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:57.633 [2024-11-21 01:45:41.335847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.633 [2024-11-21 01:45:41.335853] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.633 [2024-11-21 01:45:41.335860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.633 [2024-11-21 01:45:41.335867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.633 [2024-11-21 01:45:41.335877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.633 [2024-11-21 01:45:41.335884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.633 [2024-11-21 01:45:41.335891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.633 [2024-11-21 01:45:41.335898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.633 
[2024-11-21 01:45:41.335909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.633 [2024-11-21 01:45:41.335915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.633 [2024-11-21 01:45:41.335922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.633 [2024-11-21 01:45:41.335930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.633 [2024-11-21 01:45:41.335940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.335949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.633 [2024-11-21 01:45:41.335956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:57.633 [2024-11-21 01:45:41.335963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.633 [2024-11-21 01:45:41.335970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:57.633 [2024-11-21 01:45:41.335978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:57.633 [2024-11-21 01:45:41.335986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:57.633 [2024-11-21 01:45:41.335993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:57.633 [2024-11-21 01:45:41.335999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:57.633 [2024-11-21 01:45:41.336007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:57.633 [2024-11-21 01:45:41.336014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:57.633 [2024-11-21 01:45:41.336048] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.633 [2024-11-21 01:45:41.336056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.633 [2024-11-21 01:45:41.336070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.633 [2024-11-21 01:45:41.336078] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.633 [2024-11-21 01:45:41.336085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.633 [2024-11-21 01:45:41.336092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.336100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.633 [2024-11-21 01:45:41.336111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:19:57.633 [2024-11-21 01:45:41.336119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.365210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.365242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.633 [2024-11-21 01:45:41.365255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.041 ms 00:19:57.633 [2024-11-21 01:45:41.365262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.365384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.365398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.633 [2024-11-21 01:45:41.365407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:57.633 [2024-11-21 01:45:41.365415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.407341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.407379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.633 [2024-11-21 01:45:41.407391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.905 ms 00:19:57.633 [2024-11-21 01:45:41.407402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.407491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.407503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.633 [2024-11-21 01:45:41.407513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.633 [2024-11-21 01:45:41.407521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.407970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.407987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.633 [2024-11-21 01:45:41.407997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:19:57.633 [2024-11-21 01:45:41.408012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.408147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.408158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.633 [2024-11-21 01:45:41.408166] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:57.633 [2024-11-21 01:45:41.408173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.423076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.423105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.633 [2024-11-21 01:45:41.423115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.883 ms 00:19:57.633 [2024-11-21 01:45:41.423122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.436771] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:57.633 [2024-11-21 01:45:41.436904] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.633 [2024-11-21 01:45:41.436919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.436928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.633 [2024-11-21 01:45:41.436937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.699 ms 00:19:57.633 [2024-11-21 01:45:41.436944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.461658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.461691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.633 [2024-11-21 01:45:41.461708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.647 ms 00:19:57.633 [2024-11-21 01:45:41.461716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.473706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.473737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.633 [2024-11-21 01:45:41.473747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.917 ms 00:19:57.633 [2024-11-21 01:45:41.473755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.485587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.485631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.633 [2024-11-21 01:45:41.485641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.769 ms 00:19:57.633 [2024-11-21 01:45:41.485649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.486264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.486288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.633 [2024-11-21 01:45:41.486298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:19:57.633 [2024-11-21 01:45:41.486306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.545643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.545684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.633 [2024-11-21 01:45:41.545697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.314 ms 00:19:57.633 [2024-11-21 01:45:41.545706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.556325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.633 [2024-11-21 01:45:41.573088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.573124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.633 [2024-11-21 01:45:41.573137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.293 ms 00:19:57.633 [2024-11-21 01:45:41.573147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.633 [2024-11-21 01:45:41.573228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.633 [2024-11-21 01:45:41.573242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.633 [2024-11-21 01:45:41.573251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:57.633 [2024-11-21 01:45:41.573259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.634 [2024-11-21 01:45:41.573322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.634 [2024-11-21 01:45:41.573332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.634 [2024-11-21 01:45:41.573341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:57.634 [2024-11-21 01:45:41.573349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.634 [2024-11-21 01:45:41.573373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.634 [2024-11-21 01:45:41.573382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.634 [2024-11-21 01:45:41.573393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:57.634 [2024-11-21 01:45:41.573400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.634 [2024-11-21 01:45:41.573436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.634 [2024-11-21 01:45:41.573447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.634 [2024-11-21 01:45:41.573454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.634 [2024-11-21 01:45:41.573463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:57.634 [2024-11-21 01:45:41.573472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.894 [2024-11-21 01:45:41.597836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.894 [2024-11-21 01:45:41.597875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.894 [2024-11-21 01:45:41.597885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:19:57.894 [2024-11-21 01:45:41.597894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.894 [2024-11-21 01:45:41.597989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.894 [2024-11-21 01:45:41.598000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.894 [2024-11-21 01:45:41.598010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:57.894 [2024-11-21 01:45:41.598018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
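(Illustrative aside, not part of the build output.) The layout dump earlier in this startup reports 23592960 L2P entries with a 4-byte address size; a quick check shows this matches the 90.00 MiB l2p region size printed in the same dump:

    awk 'BEGIN { printf "%.2f MiB\n", 23592960 * 4 / 1048576 }'   # 90.00 MiB = 23592960 entries x 4 bytes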
00:19:57.894 [2024-11-21 01:45:41.598885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.894 [2024-11-21 01:45:41.601850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 292.950 ms, result 0 00:19:57.894 [2024-11-21 01:45:41.602678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.894 [2024-11-21 01:45:41.615607] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.838  [2024-11-21T01:45:43.740Z] Copying: 16/256 [MB] (16 MBps) [2024-11-21T01:45:44.679Z] Copying: 35/256 [MB] (18 MBps) [2024-11-21T01:45:45.623Z] Copying: 54/256 [MB] (19 MBps) [2024-11-21T01:45:47.009Z] Copying: 71/256 [MB] (17 MBps) [2024-11-21T01:45:47.942Z] Copying: 87/256 [MB] (16 MBps) [2024-11-21T01:45:48.875Z] Copying: 100/256 [MB] (12 MBps) [2024-11-21T01:45:49.812Z] Copying: 114/256 [MB] (13 MBps) [2024-11-21T01:45:50.754Z] Copying: 128/256 [MB] (14 MBps) [2024-11-21T01:45:51.698Z] Copying: 141724/262144 [kB] (10032 kBps) [2024-11-21T01:45:52.642Z] Copying: 151800/262144 [kB] (10076 kBps) [2024-11-21T01:45:54.028Z] Copying: 158/256 [MB] (10 MBps) [2024-11-21T01:45:54.970Z] Copying: 170/256 [MB] (12 MBps) [2024-11-21T01:45:55.912Z] Copying: 184/256 [MB] (13 MBps) [2024-11-21T01:45:56.855Z] Copying: 197/256 [MB] (13 MBps) [2024-11-21T01:45:57.796Z] Copying: 215/256 [MB] (17 MBps) [2024-11-21T01:45:58.736Z] Copying: 235/256 [MB] (19 MBps) [2024-11-21T01:45:58.736Z] Copying: 255/256 [MB] (20 MBps) [2024-11-21T01:45:58.736Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-21 01:45:58.654464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.779 [2024-11-21 01:45:58.664881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.779 [2024-11-21 01:45:58.665071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.779 [2024-11-21 01:45:58.665095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.779 [2024-11-21 01:45:58.665105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.779 [2024-11-21 01:45:58.665136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:14.779 [2024-11-21 01:45:58.668172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.668341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.780 [2024-11-21 01:45:58.668362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.019 ms 00:20:14.780 [2024-11-21 01:45:58.668370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.671513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.671686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.780 [2024-11-21 01:45:58.671706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:20:14.780 [2024-11-21 01:45:58.671715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.679722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.679768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist L2P 00:20:14.780 [2024-11-21 01:45:58.679787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.983 ms 00:20:14.780 [2024-11-21 01:45:58.679795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.686760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.686936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:14.780 [2024-11-21 01:45:58.686955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.921 ms 00:20:14.780 [2024-11-21 01:45:58.686963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.712822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.712875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.780 [2024-11-21 01:45:58.712888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.793 ms 00:20:14.780 [2024-11-21 01:45:58.712896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.729467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.729520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.780 [2024-11-21 01:45:58.729541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.520 ms 00:20:14.780 [2024-11-21 01:45:58.729552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.780 [2024-11-21 01:45:58.729731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.780 [2024-11-21 01:45:58.729743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.780 [2024-11-21 01:45:58.729753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:14.780 [2024-11-21 01:45:58.729760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.041 [2024-11-21 01:45:58.755741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.041 [2024-11-21 01:45:58.755799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.041 [2024-11-21 01:45:58.755811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.964 ms 00:20:15.042 [2024-11-21 01:45:58.755819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.042 [2024-11-21 01:45:58.781889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.042 [2024-11-21 01:45:58.781937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.042 [2024-11-21 01:45:58.781949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.981 ms 00:20:15.042 [2024-11-21 01:45:58.781956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.042 [2024-11-21 01:45:58.807201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.042 [2024-11-21 01:45:58.807249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.042 [2024-11-21 01:45:58.807260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.182 ms 00:20:15.042 [2024-11-21 01:45:58.807267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.042 [2024-11-21 01:45:58.831979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.042 
[2024-11-21 01:45:58.832029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.042 [2024-11-21 01:45:58.832040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.632 ms 00:20:15.042 [2024-11-21 01:45:58.832047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.042 [2024-11-21 01:45:58.832096] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.042 [2024-11-21 01:45:58.832119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 
0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832682] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.042 [2024-11-21 01:45:58.832719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 
01:45:58.832921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.043 [2024-11-21 01:45:58.832959] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.043 [2024-11-21 01:45:58.832968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:20:15.043 [2024-11-21 01:45:58.832976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.043 [2024-11-21 01:45:58.832983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.043 [2024-11-21 01:45:58.832991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.043 [2024-11-21 01:45:58.832999] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.043 [2024-11-21 01:45:58.833006] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.043 [2024-11-21 01:45:58.833014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.043 [2024-11-21 01:45:58.833021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.043 [2024-11-21 01:45:58.833027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.043 [2024-11-21 01:45:58.833034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.043 [2024-11-21 01:45:58.833042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.043 [2024-11-21 01:45:58.833049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.043 [2024-11-21 01:45:58.833061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:20:15.043 [2024-11-21 01:45:58.833069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.846715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.043 [2024-11-21 01:45:58.846758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.043 [2024-11-21 01:45:58.846769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.614 ms 00:20:15.043 [2024-11-21 01:45:58.846777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.847176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.043 [2024-11-21 01:45:58.847201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.043 [2024-11-21 01:45:58.847211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:20:15.043 [2024-11-21 01:45:58.847219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.886571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.043 [2024-11-21 01:45:58.886646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.043 [2024-11-21 01:45:58.886659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.043 [2024-11-21 01:45:58.886667] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.886772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.043 [2024-11-21 01:45:58.886785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.043 [2024-11-21 01:45:58.886794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.043 [2024-11-21 01:45:58.886802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.886854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.043 [2024-11-21 01:45:58.886864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.043 [2024-11-21 01:45:58.886872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.043 [2024-11-21 01:45:58.886880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.886898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.043 [2024-11-21 01:45:58.886907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.043 [2024-11-21 01:45:58.886918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.043 [2024-11-21 01:45:58.886925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.043 [2024-11-21 01:45:58.971403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.043 [2024-11-21 01:45:58.971461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.043 [2024-11-21 01:45:58.971474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.043 [2024-11-21 01:45:58.971483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.040709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.040766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.304 [2024-11-21 01:45:59.040785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.040795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.040876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.040887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.304 [2024-11-21 01:45:59.040896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.040906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.040939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.040949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.304 [2024-11-21 01:45:59.040958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.040969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.041074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.041085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.304 [2024-11-21 01:45:59.041094] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.041102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.041138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.041147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.304 [2024-11-21 01:45:59.041155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.041164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.041213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.041224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.304 [2024-11-21 01:45:59.041232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.041241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.041322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.304 [2024-11-21 01:45:59.041334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.304 [2024-11-21 01:45:59.041342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.304 [2024-11-21 01:45:59.041354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.304 [2024-11-21 01:45:59.041518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.626 ms, result 0 00:20:16.244 00:20:16.244 00:20:16.244 01:46:00 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76566 00:20:16.244 01:46:00 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76566 00:20:16.244 01:46:00 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76566 ']' 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.244 01:46:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:16.506 [2024-11-21 01:46:00.248224] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:20:16.506 [2024-11-21 01:46:00.248656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76566 ] 00:20:16.506 [2024-11-21 01:46:00.416521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.767 [2024-11-21 01:46:00.542704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.341 01:46:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.341 01:46:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:17.341 01:46:01 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:17.601 [2024-11-21 01:46:01.452127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.601 [2024-11-21 01:46:01.452404] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.865 [2024-11-21 01:46:01.632211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.632274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.865 [2024-11-21 01:46:01.632293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:17.865 [2024-11-21 01:46:01.632302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.635330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.635371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.865 [2024-11-21 01:46:01.635383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:20:17.865 [2024-11-21 01:46:01.635392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.635522] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.865 [2024-11-21 01:46:01.636336] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.865 [2024-11-21 01:46:01.636377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.636386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.865 [2024-11-21 01:46:01.636397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:20:17.865 [2024-11-21 01:46:01.636405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.638403] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.865 [2024-11-21 01:46:01.653400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.653454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.865 [2024-11-21 01:46:01.653469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.005 ms 00:20:17.865 [2024-11-21 01:46:01.653480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.653597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.653637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.865 [2024-11-21 01:46:01.653648] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:17.865 [2024-11-21 01:46:01.653660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.666075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.666157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.865 [2024-11-21 01:46:01.666174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:20:17.865 [2024-11-21 01:46:01.666186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.666422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.666439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.865 [2024-11-21 01:46:01.666450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:17.865 [2024-11-21 01:46:01.666461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.666504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.666515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.865 [2024-11-21 01:46:01.666524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:17.865 [2024-11-21 01:46:01.666534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.666565] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.865 [2024-11-21 01:46:01.671609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.671667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.865 [2024-11-21 01:46:01.671682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:20:17.865 [2024-11-21 01:46:01.671691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.671802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.671816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.865 [2024-11-21 01:46:01.671829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:17.865 [2024-11-21 01:46:01.671840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.671868] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.865 [2024-11-21 01:46:01.671925] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.865 [2024-11-21 01:46:01.671978] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.865 [2024-11-21 01:46:01.671998] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.865 [2024-11-21 01:46:01.672119] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.865 [2024-11-21 01:46:01.672133] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.865 [2024-11-21 01:46:01.672151] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.865 [2024-11-21 01:46:01.672167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.865 [2024-11-21 01:46:01.672180] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.865 [2024-11-21 01:46:01.672190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.865 [2024-11-21 01:46:01.672202] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.865 [2024-11-21 01:46:01.672211] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.865 [2024-11-21 01:46:01.672224] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.865 [2024-11-21 01:46:01.672232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.672242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.865 [2024-11-21 01:46:01.672252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:20:17.865 [2024-11-21 01:46:01.672262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.865 [2024-11-21 01:46:01.672354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.865 [2024-11-21 01:46:01.672366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.865 [2024-11-21 01:46:01.672374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:17.866 [2024-11-21 01:46:01.672386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.866 [2024-11-21 01:46:01.672491] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.866 [2024-11-21 01:46:01.672507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.866 [2024-11-21 01:46:01.672517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.866 [2024-11-21 01:46:01.672548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.866 [2024-11-21 01:46:01.672575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.866 [2024-11-21 01:46:01.672593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.866 [2024-11-21 01:46:01.672602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.866 [2024-11-21 01:46:01.672609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.866 [2024-11-21 01:46:01.672650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.866 [2024-11-21 01:46:01.672658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.866 [2024-11-21 01:46:01.672668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 
[2024-11-21 01:46:01.672676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.866 [2024-11-21 01:46:01.672685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.866 [2024-11-21 01:46:01.672717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.866 [2024-11-21 01:46:01.672750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.866 [2024-11-21 01:46:01.672774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.866 [2024-11-21 01:46:01.672800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.866 [2024-11-21 01:46:01.672823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.866 [2024-11-21 01:46:01.672842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.866 [2024-11-21 01:46:01.672851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.866 [2024-11-21 01:46:01.672858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.866 [2024-11-21 01:46:01.672866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.866 [2024-11-21 01:46:01.672873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.866 [2024-11-21 01:46:01.672885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.866 [2024-11-21 01:46:01.672904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.866 [2024-11-21 01:46:01.672912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672922] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.866 [2024-11-21 01:46:01.672932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.866 [2024-11-21 01:46:01.672945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.866 [2024-11-21 01:46:01.672952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.866 [2024-11-21 01:46:01.672962] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:17.866 [2024-11-21 01:46:01.672971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.866 [2024-11-21 01:46:01.672980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.866 [2024-11-21 01:46:01.672986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.866 [2024-11-21 01:46:01.672994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.866 [2024-11-21 01:46:01.673001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.866 [2024-11-21 01:46:01.673012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.866 [2024-11-21 01:46:01.673024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.866 [2024-11-21 01:46:01.673045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.866 [2024-11-21 01:46:01.673056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.866 [2024-11-21 01:46:01.673063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.866 [2024-11-21 01:46:01.673074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.866 [2024-11-21 01:46:01.673081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.866 [2024-11-21 01:46:01.673091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.866 [2024-11-21 01:46:01.673098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.866 [2024-11-21 01:46:01.673108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.866 [2024-11-21 01:46:01.673116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.866 [2024-11-21 01:46:01.673158] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.866 [2024-11-21 
01:46:01.673167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.866 [2024-11-21 01:46:01.673188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.866 [2024-11-21 01:46:01.673199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.866 [2024-11-21 01:46:01.673207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.866 [2024-11-21 01:46:01.673216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.866 [2024-11-21 01:46:01.673224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.866 [2024-11-21 01:46:01.673235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:20:17.867 [2024-11-21 01:46:01.673242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.711740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.711787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.867 [2024-11-21 01:46:01.711802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.396 ms 00:20:17.867 [2024-11-21 01:46:01.711810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.711972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.711985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.867 [2024-11-21 01:46:01.711996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:17.867 [2024-11-21 01:46:01.712005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.751742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.751787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.867 [2024-11-21 01:46:01.751806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.707 ms 00:20:17.867 [2024-11-21 01:46:01.751815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.751935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.751947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.867 [2024-11-21 01:46:01.751959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.867 [2024-11-21 01:46:01.751967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.752674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.752710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.867 [2024-11-21 01:46:01.752727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:20:17.867 [2024-11-21 01:46:01.752736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.752908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.752920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.867 [2024-11-21 01:46:01.752931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:20:17.867 [2024-11-21 01:46:01.752939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.774069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.774104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.867 [2024-11-21 01:46:01.774120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.101 ms 00:20:17.867 [2024-11-21 01:46:01.774130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.789458] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.867 [2024-11-21 01:46:01.789508] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.867 [2024-11-21 01:46:01.789525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.789535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.867 [2024-11-21 01:46:01.789549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.263 ms 00:20:17.867 [2024-11-21 01:46:01.789557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.867 [2024-11-21 01:46:01.816194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.867 [2024-11-21 01:46:01.816246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.867 [2024-11-21 01:46:01.816263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.509 ms 00:20:17.867 [2024-11-21 01:46:01.816273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.829637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.829681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.129 [2024-11-21 01:46:01.829699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.280 ms 00:20:18.129 [2024-11-21 01:46:01.829707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.842556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.842598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.129 [2024-11-21 01:46:01.842636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.755 ms 00:20:18.129 [2024-11-21 01:46:01.842644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.843350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.843382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.129 [2024-11-21 01:46:01.843403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:20:18.129 [2024-11-21 01:46:01.843411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 
01:46:01.924695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.924759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.129 [2024-11-21 01:46:01.924780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.251 ms 00:20:18.129 [2024-11-21 01:46:01.924789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.936381] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.129 [2024-11-21 01:46:01.960699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.960756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.129 [2024-11-21 01:46:01.960774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.808 ms 00:20:18.129 [2024-11-21 01:46:01.960785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.960887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.960902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.129 [2024-11-21 01:46:01.960912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:18.129 [2024-11-21 01:46:01.960923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.960993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.961006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.129 [2024-11-21 01:46:01.961016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:18.129 [2024-11-21 01:46:01.961026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.961057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.961070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.129 [2024-11-21 01:46:01.961079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.129 [2024-11-21 01:46:01.961093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.961137] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.129 [2024-11-21 01:46:01.961153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.961162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.129 [2024-11-21 01:46:01.961177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:18.129 [2024-11-21 01:46:01.961185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.988050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.988251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.129 [2024-11-21 01:46:01.988279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.830 ms 00:20:18.129 [2024-11-21 01:46:01.988289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.988409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.129 [2024-11-21 01:46:01.988421] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.129 [2024-11-21 01:46:01.988433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:18.129 [2024-11-21 01:46:01.988447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.129 [2024-11-21 01:46:01.989889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.129 [2024-11-21 01:46:01.993406] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 357.273 ms, result 0 00:20:18.129 [2024-11-21 01:46:01.995594] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.129 Some configs were skipped because the RPC state that can call them passed over. 00:20:18.129 01:46:02 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:18.391 [2024-11-21 01:46:02.244374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.391 [2024-11-21 01:46:02.244563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:18.391 [2024-11-21 01:46:02.244653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:20:18.391 [2024-11-21 01:46:02.244684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.391 [2024-11-21 01:46:02.244741] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.554 ms, result 0 00:20:18.391 true 00:20:18.391 01:46:02 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:18.651 [2024-11-21 01:46:02.460129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.651 [2024-11-21 01:46:02.460309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:18.651 [2024-11-21 01:46:02.460376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:20:18.651 [2024-11-21 01:46:02.460400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.651 [2024-11-21 01:46:02.460462] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.003 ms, result 0 00:20:18.651 true 00:20:18.651 01:46:02 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76566 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76566 ']' 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76566 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76566 00:20:18.651 killing process with pid 76566 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76566' 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76566 00:20:18.651 01:46:02 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76566 00:20:19.596 [2024-11-21 01:46:03.328132] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.328216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.596 [2024-11-21 01:46:03.328233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.596 [2024-11-21 01:46:03.328244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.328269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:19.596 [2024-11-21 01:46:03.331608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.331671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.596 [2024-11-21 01:46:03.331689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:20:19.596 [2024-11-21 01:46:03.331697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.332038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.332051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.596 [2024-11-21 01:46:03.332065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:19.596 [2024-11-21 01:46:03.332073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.338326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.338373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.596 [2024-11-21 01:46:03.338390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.227 ms 00:20:19.596 [2024-11-21 01:46:03.338399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.345403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.345446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.596 [2024-11-21 01:46:03.345460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.949 ms 00:20:19.596 [2024-11-21 01:46:03.345469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.356650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.356930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.596 [2024-11-21 01:46:03.356960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.093 ms 00:20:19.596 [2024-11-21 01:46:03.356977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.367528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.367580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.596 [2024-11-21 01:46:03.367598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.338 ms 00:20:19.596 [2024-11-21 01:46:03.367606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.367768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.367780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.596 [2024-11-21 01:46:03.367820] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:19.596 [2024-11-21 01:46:03.367831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.379875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.596 [2024-11-21 01:46:03.379919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.596 [2024-11-21 01:46:03.379932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.016 ms 00:20:19.596 [2024-11-21 01:46:03.379939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.596 [2024-11-21 01:46:03.391025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.597 [2024-11-21 01:46:03.391216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.597 [2024-11-21 01:46:03.391244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.017 ms 00:20:19.597 [2024-11-21 01:46:03.391252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.597 [2024-11-21 01:46:03.401824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.597 [2024-11-21 01:46:03.401989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.597 [2024-11-21 01:46:03.402015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.507 ms 00:20:19.597 [2024-11-21 01:46:03.402022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.597 [2024-11-21 01:46:03.412656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.597 [2024-11-21 01:46:03.412838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.597 [2024-11-21 01:46:03.412863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.378 ms 00:20:19.597 [2024-11-21 01:46:03.412870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.597 [2024-11-21 01:46:03.413023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.597 [2024-11-21 01:46:03.413060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 
01:46:03.413159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:19.597 [2024-11-21 01:46:03.413409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.597 [2024-11-21 01:46:03.413904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.413995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.598 [2024-11-21 01:46:03.414116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.598 [2024-11-21 01:46:03.414132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:20:19.598 [2024-11-21 01:46:03.414148] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.598 [2024-11-21 01:46:03.414162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.598 [2024-11-21 01:46:03.414170] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.598 [2024-11-21 01:46:03.414181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.598 [2024-11-21 01:46:03.414189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.598 [2024-11-21 01:46:03.414199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.598 [2024-11-21 01:46:03.414206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.598 [2024-11-21 01:46:03.414215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.598 [2024-11-21 01:46:03.414221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.598 [2024-11-21 01:46:03.414231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:19.598 [2024-11-21 01:46:03.414240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.598 [2024-11-21 01:46:03.414252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:20:19.598 [2024-11-21 01:46:03.414259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.429083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.598 [2024-11-21 01:46:03.429126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.598 [2024-11-21 01:46:03.429144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.780 ms 00:20:19.598 [2024-11-21 01:46:03.429152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.429684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.598 [2024-11-21 01:46:03.429711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.598 [2024-11-21 01:46:03.429725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:20:19.598 [2024-11-21 01:46:03.429736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.482879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.598 [2024-11-21 01:46:03.482926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.598 [2024-11-21 01:46:03.482941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.598 [2024-11-21 01:46:03.482950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.483056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.598 [2024-11-21 01:46:03.483066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.598 [2024-11-21 01:46:03.483077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.598 [2024-11-21 01:46:03.483089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.483149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.598 [2024-11-21 01:46:03.483161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.598 [2024-11-21 01:46:03.483174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.598 [2024-11-21 01:46:03.483183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.598 [2024-11-21 01:46:03.483206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.598 [2024-11-21 01:46:03.483216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.598 [2024-11-21 01:46:03.483228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.598 [2024-11-21 01:46:03.483238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.575490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.575546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.860 [2024-11-21 01:46:03.575563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.575572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 
01:46:03.650193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.860 [2024-11-21 01:46:03.650267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.860 [2024-11-21 01:46:03.650396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.860 [2024-11-21 01:46:03.650466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.860 [2024-11-21 01:46:03.650631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.860 [2024-11-21 01:46:03.650709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.860 [2024-11-21 01:46:03.650819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.650890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.860 [2024-11-21 01:46:03.650901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.860 [2024-11-21 01:46:03.650913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.860 [2024-11-21 01:46:03.650923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.860 [2024-11-21 01:46:03.651116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 322.940 ms, result 0 00:20:20.802 01:46:04 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:20.802 01:46:04 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.802 [2024-11-21 01:46:04.458562] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:20.802 [2024-11-21 01:46:04.458745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76624 ] 00:20:20.802 [2024-11-21 01:46:04.621095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.802 [2024-11-21 01:46:04.742950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.376 [2024-11-21 01:46:05.033704] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.376 [2024-11-21 01:46:05.034063] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.376 [2024-11-21 01:46:05.207176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.207253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.376 [2024-11-21 01:46:05.207275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:21.376 [2024-11-21 01:46:05.207290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.210865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.211070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.376 [2024-11-21 01:46:05.211475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.547 ms 00:20:21.376 [2024-11-21 01:46:05.211508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.212067] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.376 [2024-11-21 01:46:05.212986] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.376 [2024-11-21 01:46:05.213037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.213047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.376 [2024-11-21 01:46:05.213059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:20:21.376 [2024-11-21 01:46:05.213068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.215454] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:21.376 [2024-11-21 01:46:05.231034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.231248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:21.376 [2024-11-21 01:46:05.231273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.581 ms 00:20:21.376 [2024-11-21 01:46:05.231283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.231702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.231744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:21.376 [2024-11-21 01:46:05.231759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:20:21.376 [2024-11-21 01:46:05.231770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.243707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.243757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.376 [2024-11-21 01:46:05.243770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.881 ms 00:20:21.376 [2024-11-21 01:46:05.243780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.243924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.243938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.376 [2024-11-21 01:46:05.243949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:21.376 [2024-11-21 01:46:05.243958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.243989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.244001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.376 [2024-11-21 01:46:05.244012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:21.376 [2024-11-21 01:46:05.244022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.244047] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:21.376 [2024-11-21 01:46:05.248756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.248800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.376 [2024-11-21 01:46:05.248812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.715 ms 00:20:21.376 [2024-11-21 01:46:05.248821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.248889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.248900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.376 [2024-11-21 01:46:05.248910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:21.376 [2024-11-21 01:46:05.248918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.248939] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:21.376 [2024-11-21 01:46:05.248967] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:21.376 [2024-11-21 01:46:05.249009] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:21.376 [2024-11-21 01:46:05.249026] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:21.376 [2024-11-21 01:46:05.249141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:21.376 [2024-11-21 01:46:05.249155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.376 [2024-11-21 01:46:05.249167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:21.376 [2024-11-21 01:46:05.249179] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.376 [2024-11-21 01:46:05.249194] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.376 [2024-11-21 01:46:05.249204] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:21.376 [2024-11-21 01:46:05.249213] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.376 [2024-11-21 01:46:05.249221] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:21.376 [2024-11-21 01:46:05.249230] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:21.376 [2024-11-21 01:46:05.249238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.249249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.376 [2024-11-21 01:46:05.249257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:21.376 [2024-11-21 01:46:05.249265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.249375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.376 [2024-11-21 01:46:05.249386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.376 [2024-11-21 01:46:05.249398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:21.376 [2024-11-21 01:46:05.249406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.376 [2024-11-21 01:46:05.249518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.376 [2024-11-21 01:46:05.249533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.376 [2024-11-21 01:46:05.249543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.376 [2024-11-21 01:46:05.249552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.376 [2024-11-21 01:46:05.249569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:21.376 [2024-11-21 01:46:05.249586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.376 [2024-11-21 01:46:05.249594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.376 [2024-11-21 01:46:05.249610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.376 [2024-11-21 01:46:05.249645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:21.376 [2024-11-21 01:46:05.249654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.376 [2024-11-21 01:46:05.249671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.376 [2024-11-21 01:46:05.249678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:21.376 [2024-11-21 01:46:05.249686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249696] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.376 [2024-11-21 01:46:05.249704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:21.376 [2024-11-21 01:46:05.249717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.376 [2024-11-21 01:46:05.249733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:21.376 [2024-11-21 01:46:05.249741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.377 [2024-11-21 01:46:05.249748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.377 [2024-11-21 01:46:05.249755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:21.377 [2024-11-21 01:46:05.249762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.377 [2024-11-21 01:46:05.249798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.377 [2024-11-21 01:46:05.249808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:21.377 [2024-11-21 01:46:05.249818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.377 [2024-11-21 01:46:05.249828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.377 [2024-11-21 01:46:05.249841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:21.377 [2024-11-21 01:46:05.249853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.377 [2024-11-21 01:46:05.249864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.377 [2024-11-21 01:46:05.249879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:21.377 [2024-11-21 01:46:05.249892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.377 [2024-11-21 01:46:05.249903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.377 [2024-11-21 01:46:05.249915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:21.377 [2024-11-21 01:46:05.249929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.377 [2024-11-21 01:46:05.249941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:21.377 [2024-11-21 01:46:05.249954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:21.377 [2024-11-21 01:46:05.249965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.377 [2024-11-21 01:46:05.249978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:21.377 [2024-11-21 01:46:05.249990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:21.377 [2024-11-21 01:46:05.250000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.377 [2024-11-21 01:46:05.250010] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.377 [2024-11-21 01:46:05.250023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.377 [2024-11-21 01:46:05.250038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.377 [2024-11-21 01:46:05.250056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.377 [2024-11-21 01:46:05.250071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.377 
[2024-11-21 01:46:05.250085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.377 [2024-11-21 01:46:05.250098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.377 [2024-11-21 01:46:05.250113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.377 [2024-11-21 01:46:05.250126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.377 [2024-11-21 01:46:05.250137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.377 [2024-11-21 01:46:05.250152] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.377 [2024-11-21 01:46:05.250172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:21.377 [2024-11-21 01:46:05.250213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:21.377 [2024-11-21 01:46:05.250228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:21.377 [2024-11-21 01:46:05.250246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:21.377 [2024-11-21 01:46:05.250259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:21.377 [2024-11-21 01:46:05.250273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:21.377 [2024-11-21 01:46:05.250288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:21.377 [2024-11-21 01:46:05.250310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:21.377 [2024-11-21 01:46:05.250323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:21.377 [2024-11-21 01:46:05.250332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:21.377 [2024-11-21 01:46:05.250373] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.377 [2024-11-21 01:46:05.250383] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.377 [2024-11-21 01:46:05.250402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.377 [2024-11-21 01:46:05.250410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.377 [2024-11-21 01:46:05.250417] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.377 [2024-11-21 01:46:05.250427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.377 [2024-11-21 01:46:05.250435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.377 [2024-11-21 01:46:05.250451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:20:21.377 [2024-11-21 01:46:05.250459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.377 [2024-11-21 01:46:05.289521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.377 [2024-11-21 01:46:05.289779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.377 [2024-11-21 01:46:05.289801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.987 ms 00:20:21.377 [2024-11-21 01:46:05.289813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.377 [2024-11-21 01:46:05.289968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.377 [2024-11-21 01:46:05.289986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:21.377 [2024-11-21 01:46:05.289995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:21.377 [2024-11-21 01:46:05.290004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.340206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.340265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.639 [2024-11-21 01:46:05.340279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.177 ms 00:20:21.639 [2024-11-21 01:46:05.340293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.340417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.340430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.639 [2024-11-21 01:46:05.340443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.639 [2024-11-21 01:46:05.340453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.341237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.341296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.639 [2024-11-21 01:46:05.341309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:20:21.639 [2024-11-21 01:46:05.341330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 
01:46:05.341506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.341518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.639 [2024-11-21 01:46:05.341528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:20:21.639 [2024-11-21 01:46:05.341537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.360962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.361153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.639 [2024-11-21 01:46:05.361225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.401 ms 00:20:21.639 [2024-11-21 01:46:05.361250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.377195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:21.639 [2024-11-21 01:46:05.377417] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:21.639 [2024-11-21 01:46:05.377439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.377451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:21.639 [2024-11-21 01:46:05.377462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.015 ms 00:20:21.639 [2024-11-21 01:46:05.377471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.404848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.404914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:21.639 [2024-11-21 01:46:05.404929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.169 ms 00:20:21.639 [2024-11-21 01:46:05.404938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.418477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.418541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:21.639 [2024-11-21 01:46:05.418555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.433 ms 00:20:21.639 [2024-11-21 01:46:05.418562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.431784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.431978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:21.639 [2024-11-21 01:46:05.432000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:20:21.639 [2024-11-21 01:46:05.432007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.432708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.432741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:21.639 [2024-11-21 01:46:05.432753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:20:21.639 [2024-11-21 01:46:05.432762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.508791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.508853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:21.639 [2024-11-21 01:46:05.508869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.997 ms 00:20:21.639 [2024-11-21 01:46:05.508879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.520944] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:21.639 [2024-11-21 01:46:05.547191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.547436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:21.639 [2024-11-21 01:46:05.547459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.183 ms 00:20:21.639 [2024-11-21 01:46:05.547469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.547609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.547647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:21.639 [2024-11-21 01:46:05.547663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:21.639 [2024-11-21 01:46:05.547672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.547744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.547758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:21.639 [2024-11-21 01:46:05.547768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:21.639 [2024-11-21 01:46:05.547777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.547814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.547829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:21.639 [2024-11-21 01:46:05.547839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:21.639 [2024-11-21 01:46:05.547847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.547892] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:21.639 [2024-11-21 01:46:05.547905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.547914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:21.639 [2024-11-21 01:46:05.547923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:21.639 [2024-11-21 01:46:05.547932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.577003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.577059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:21.639 [2024-11-21 01:46:05.577073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.043 ms 00:20:21.639 [2024-11-21 01:46:05.577083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.577248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.639 [2024-11-21 01:46:05.577262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:21.639 [2024-11-21 01:46:05.577274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:21.639 [2024-11-21 01:46:05.577316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.639 [2024-11-21 01:46:05.578704] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.639 [2024-11-21 01:46:05.582364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.111 ms, result 0 00:20:21.639 [2024-11-21 01:46:05.583872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.901 [2024-11-21 01:46:05.598075] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.843  [2024-11-21T01:46:07.743Z] Copying: 16/256 [MB] (16 MBps) [2024-11-21T01:46:08.687Z] Copying: 29/256 [MB] (13 MBps) [2024-11-21T01:46:09.629Z] Copying: 43/256 [MB] (14 MBps) [2024-11-21T01:46:11.017Z] Copying: 58/256 [MB] (14 MBps) [2024-11-21T01:46:11.961Z] Copying: 75/256 [MB] (17 MBps) [2024-11-21T01:46:12.905Z] Copying: 86/256 [MB] (10 MBps) [2024-11-21T01:46:13.849Z] Copying: 96/256 [MB] (10 MBps) [2024-11-21T01:46:14.793Z] Copying: 107/256 [MB] (10 MBps) [2024-11-21T01:46:15.741Z] Copying: 117/256 [MB] (10 MBps) [2024-11-21T01:46:16.746Z] Copying: 128/256 [MB] (10 MBps) [2024-11-21T01:46:17.691Z] Copying: 138/256 [MB] (10 MBps) [2024-11-21T01:46:18.637Z] Copying: 148/256 [MB] (10 MBps) [2024-11-21T01:46:20.025Z] Copying: 159/256 [MB] (10 MBps) [2024-11-21T01:46:20.970Z] Copying: 169/256 [MB] (10 MBps) [2024-11-21T01:46:21.914Z] Copying: 180/256 [MB] (10 MBps) [2024-11-21T01:46:22.859Z] Copying: 196/256 [MB] (16 MBps) [2024-11-21T01:46:23.804Z] Copying: 212/256 [MB] (15 MBps) [2024-11-21T01:46:24.747Z] Copying: 229/256 [MB] (17 MBps) [2024-11-21T01:46:25.319Z] Copying: 246/256 [MB] (16 MBps) [2024-11-21T01:46:25.319Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-21 01:46:25.197634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:41.362 [2024-11-21 01:46:25.207217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.207255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:41.362 [2024-11-21 01:46:25.207269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:41.362 [2024-11-21 01:46:25.207284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.207305] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:41.362 [2024-11-21 01:46:25.210147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.210299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:41.362 [2024-11-21 01:46:25.210317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.829 ms 00:20:41.362 [2024-11-21 01:46:25.210325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.210584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.210594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:41.362 [2024-11-21 01:46:25.210603] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:20:41.362 [2024-11-21 01:46:25.210628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.214361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.214389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:41.362 [2024-11-21 01:46:25.214399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.711 ms 00:20:41.362 [2024-11-21 01:46:25.214407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.221298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.221436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:41.362 [2024-11-21 01:46:25.221454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:20:41.362 [2024-11-21 01:46:25.221463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.246301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.246343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:41.362 [2024-11-21 01:46:25.246355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.779 ms 00:20:41.362 [2024-11-21 01:46:25.246363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.267821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.268019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:41.362 [2024-11-21 01:46:25.268046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.410 ms 00:20:41.362 [2024-11-21 01:46:25.268061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.268270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.268285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:41.362 [2024-11-21 01:46:25.268296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:41.362 [2024-11-21 01:46:25.268304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.362 [2024-11-21 01:46:25.293652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.362 [2024-11-21 01:46:25.293699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:41.362 [2024-11-21 01:46:25.293712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.319 ms 00:20:41.362 [2024-11-21 01:46:25.293720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.625 [2024-11-21 01:46:25.319116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.625 [2024-11-21 01:46:25.319301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:41.625 [2024-11-21 01:46:25.319321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.344 ms 00:20:41.625 [2024-11-21 01:46:25.319329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.625 [2024-11-21 01:46:25.344115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.625 [2024-11-21 01:46:25.344157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:41.625 
[2024-11-21 01:46:25.344169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.728 ms 00:20:41.625 [2024-11-21 01:46:25.344176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.625 [2024-11-21 01:46:25.368727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.625 [2024-11-21 01:46:25.368768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:41.625 [2024-11-21 01:46:25.368779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.468 ms 00:20:41.625 [2024-11-21 01:46:25.368786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.625 [2024-11-21 01:46:25.368852] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:41.625 [2024-11-21 01:46:25.368872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.368995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.369004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.369011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.369019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.369027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:41.625 [2024-11-21 01:46:25.369035] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 
01:46:25.369243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:20:41.626 [2024-11-21 01:46:25.369485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:41.626 [2024-11-21 01:46:25.369759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:41.627 [2024-11-21 01:46:25.369767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:41.627 [2024-11-21 01:46:25.369775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:41.627 [2024-11-21 01:46:25.369821] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:41.627 [2024-11-21 01:46:25.369831] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:20:41.627 [2024-11-21 01:46:25.369840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:41.627 [2024-11-21 01:46:25.369850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:41.627 [2024-11-21 01:46:25.369858] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:41.627 [2024-11-21 01:46:25.369866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:41.627 [2024-11-21 01:46:25.369875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:41.627 [2024-11-21 01:46:25.369883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:41.627 [2024-11-21 01:46:25.369892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:41.627 [2024-11-21 01:46:25.369898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:41.627 [2024-11-21 01:46:25.369905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:41.627 [2024-11-21 01:46:25.369913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.627 [2024-11-21 01:46:25.369926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:41.627 [2024-11-21 01:46:25.369939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:20:41.627 [2024-11-21 01:46:25.369948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.384355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.627 [2024-11-21 01:46:25.384394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:41.627 [2024-11-21 01:46:25.384407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.374 ms 00:20:41.627 [2024-11-21 01:46:25.384415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.384895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:41.627 [2024-11-21 01:46:25.384917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:41.627 [2024-11-21 01:46:25.384928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:20:41.627 [2024-11-21 01:46:25.384937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.426656] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.627 [2024-11-21 01:46:25.426703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:41.627 [2024-11-21 01:46:25.426716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.627 [2024-11-21 01:46:25.426725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.426829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.627 [2024-11-21 01:46:25.426839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:41.627 [2024-11-21 01:46:25.426850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.627 [2024-11-21 01:46:25.426858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.426920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.627 [2024-11-21 01:46:25.426931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:41.627 [2024-11-21 01:46:25.426941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.627 [2024-11-21 01:46:25.426949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.426968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.627 [2024-11-21 01:46:25.426981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:41.627 [2024-11-21 01:46:25.426989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.627 [2024-11-21 01:46:25.426997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.627 [2024-11-21 01:46:25.518299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.627 [2024-11-21 01:46:25.518357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:41.627 [2024-11-21 01:46:25.518371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.627 [2024-11-21 01:46:25.518381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.591554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.591655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:41.889 [2024-11-21 01:46:25.591688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.591698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.591762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.591774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:41.889 [2024-11-21 01:46:25.591784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.591792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.591830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.591839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:41.889 [2024-11-21 01:46:25.591853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.591863] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.591975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.591995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:41.889 [2024-11-21 01:46:25.592004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.592012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.592049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.592059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:41.889 [2024-11-21 01:46:25.592067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.592079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.592132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.592142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:41.889 [2024-11-21 01:46:25.592152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.592161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.592221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:41.889 [2024-11-21 01:46:25.592233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:41.889 [2024-11-21 01:46:25.592245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:41.889 [2024-11-21 01:46:25.592254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:41.889 [2024-11-21 01:46:25.592447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.198 ms, result 0 00:20:42.833 00:20:42.833 00:20:42.833 01:46:26 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:42.834 01:46:26 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:43.095 01:46:27 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:43.356 [2024-11-21 01:46:27.100684] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
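The three trim.sh steps echoed just above drive what follows: the script first compares the first 4 MiB of test/ftl/data against /dev/zero and records its md5sum, then uses spdk_dd to write 1024 blocks of random_pattern into the ftl0 bdev; the FTL startup traced below is that spdk_dd process bringing the device up. A minimal sketch of the same sequence, reusing only the paths and flags visible in the trace (the 1024-block count matches the 4194304-byte compare only if the FTL block size is 4 KiB, which the layout dump later in the log is consistent with):

  cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
      --ob=ftl0 --count=1024 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json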
00:20:43.356 [2024-11-21 01:46:27.100907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76866 ] 00:20:43.356 [2024-11-21 01:46:27.264668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.618 [2024-11-21 01:46:27.382427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.879 [2024-11-21 01:46:27.705154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.879 [2024-11-21 01:46:27.705241] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:44.142 [2024-11-21 01:46:27.871219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.142 [2024-11-21 01:46:27.871285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:44.142 [2024-11-21 01:46:27.871303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:44.142 [2024-11-21 01:46:27.871312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.142 [2024-11-21 01:46:27.874547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.142 [2024-11-21 01:46:27.874598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.142 [2024-11-21 01:46:27.874609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:20:44.142 [2024-11-21 01:46:27.874637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.142 [2024-11-21 01:46:27.874755] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:44.142 [2024-11-21 01:46:27.875683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:44.142 [2024-11-21 01:46:27.875722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.142 [2024-11-21 01:46:27.875731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.142 [2024-11-21 01:46:27.875741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:20:44.142 [2024-11-21 01:46:27.875749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.878091] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:44.143 [2024-11-21 01:46:27.893446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.893503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:44.143 [2024-11-21 01:46:27.893516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:20:44.143 [2024-11-21 01:46:27.893526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.893659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.893674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:44.143 [2024-11-21 01:46:27.893685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:44.143 [2024-11-21 01:46:27.893694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.905013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:44.143 [2024-11-21 01:46:27.905056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.143 [2024-11-21 01:46:27.905068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.273 ms 00:20:44.143 [2024-11-21 01:46:27.905076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.905203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.905214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.143 [2024-11-21 01:46:27.905224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:44.143 [2024-11-21 01:46:27.905234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.905261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.905274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:44.143 [2024-11-21 01:46:27.905283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:44.143 [2024-11-21 01:46:27.905319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.905344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:44.143 [2024-11-21 01:46:27.909913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.909952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.143 [2024-11-21 01:46:27.909963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:20:44.143 [2024-11-21 01:46:27.909971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.910030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.910041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:44.143 [2024-11-21 01:46:27.910051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:44.143 [2024-11-21 01:46:27.910060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.910080] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:44.143 [2024-11-21 01:46:27.910111] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:44.143 [2024-11-21 01:46:27.910154] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:44.143 [2024-11-21 01:46:27.910172] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:44.143 [2024-11-21 01:46:27.910284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:44.143 [2024-11-21 01:46:27.910297] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:44.143 [2024-11-21 01:46:27.910309] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:44.143 [2024-11-21 01:46:27.910320] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910334] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910343] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:44.143 [2024-11-21 01:46:27.910353] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:44.143 [2024-11-21 01:46:27.910360] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:44.143 [2024-11-21 01:46:27.910369] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:44.143 [2024-11-21 01:46:27.910377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.910386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:44.143 [2024-11-21 01:46:27.910396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:20:44.143 [2024-11-21 01:46:27.910404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.910495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.143 [2024-11-21 01:46:27.910512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:44.143 [2024-11-21 01:46:27.910524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:44.143 [2024-11-21 01:46:27.910531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.143 [2024-11-21 01:46:27.910669] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:44.143 [2024-11-21 01:46:27.910683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:44.143 [2024-11-21 01:46:27.910694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:44.143 [2024-11-21 01:46:27.910719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:44.143 [2024-11-21 01:46:27.910744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.143 [2024-11-21 01:46:27.910761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:44.143 [2024-11-21 01:46:27.910769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:44.143 [2024-11-21 01:46:27.910777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:44.143 [2024-11-21 01:46:27.910793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:44.143 [2024-11-21 01:46:27.910805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:44.143 [2024-11-21 01:46:27.910813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:44.143 [2024-11-21 01:46:27.910829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910836] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:44.143 [2024-11-21 01:46:27.910851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:44.143 [2024-11-21 01:46:27.910873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:44.143 [2024-11-21 01:46:27.910894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:44.143 [2024-11-21 01:46:27.910915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:44.143 [2024-11-21 01:46:27.910921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:44.143 [2024-11-21 01:46:27.910929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:44.144 [2024-11-21 01:46:27.910936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:44.144 [2024-11-21 01:46:27.910943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.144 [2024-11-21 01:46:27.910949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:44.144 [2024-11-21 01:46:27.910955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:44.144 [2024-11-21 01:46:27.910962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:44.144 [2024-11-21 01:46:27.910969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:44.144 [2024-11-21 01:46:27.910975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:44.144 [2024-11-21 01:46:27.910982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.144 [2024-11-21 01:46:27.910990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:44.144 [2024-11-21 01:46:27.910997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:44.144 [2024-11-21 01:46:27.911004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.144 [2024-11-21 01:46:27.911011] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:44.144 [2024-11-21 01:46:27.911019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:44.144 [2024-11-21 01:46:27.911027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:44.144 [2024-11-21 01:46:27.911039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:44.144 [2024-11-21 01:46:27.911049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:44.144 [2024-11-21 01:46:27.911056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:44.144 [2024-11-21 01:46:27.911063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:44.144 
[2024-11-21 01:46:27.911070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:44.144 [2024-11-21 01:46:27.911076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:44.144 [2024-11-21 01:46:27.911083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:44.144 [2024-11-21 01:46:27.911092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:44.144 [2024-11-21 01:46:27.911103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:44.144 [2024-11-21 01:46:27.911121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:44.144 [2024-11-21 01:46:27.911128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:44.144 [2024-11-21 01:46:27.911135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:44.144 [2024-11-21 01:46:27.911142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:44.144 [2024-11-21 01:46:27.911149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:44.144 [2024-11-21 01:46:27.911157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:44.144 [2024-11-21 01:46:27.911164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:44.144 [2024-11-21 01:46:27.911172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:44.144 [2024-11-21 01:46:27.911178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:44.144 [2024-11-21 01:46:27.911213] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:44.144 [2024-11-21 01:46:27.911223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:44.144 [2024-11-21 01:46:27.911239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:44.144 [2024-11-21 01:46:27.911246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:44.144 [2024-11-21 01:46:27.911253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:44.144 [2024-11-21 01:46:27.911261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.911269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:44.144 [2024-11-21 01:46:27.911282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:20:44.144 [2024-11-21 01:46:27.911291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:27.949749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.950915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.144 [2024-11-21 01:46:27.951248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.402 ms 00:20:44.144 [2024-11-21 01:46:27.951327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:27.951922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.952144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.144 [2024-11-21 01:46:27.952182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:20:44.144 [2024-11-21 01:46:27.952203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:27.997868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.998084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.144 [2024-11-21 01:46:27.998998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.599 ms 00:20:44.144 [2024-11-21 01:46:27.999036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:27.999182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.999196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.144 [2024-11-21 01:46:27.999207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:44.144 [2024-11-21 01:46:27.999215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:27.999796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:27.999826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.144 [2024-11-21 01:46:27.999838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:20:44.144 [2024-11-21 01:46:27.999852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:28.000017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:28.000036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.144 [2024-11-21 01:46:28.000046] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:20:44.144 [2024-11-21 01:46:28.000054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:28.016680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:28.016727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.144 [2024-11-21 01:46:28.016740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.601 ms 00:20:44.144 [2024-11-21 01:46:28.016748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:28.031172] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:44.144 [2024-11-21 01:46:28.031365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:44.144 [2024-11-21 01:46:28.031385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:28.031395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:44.144 [2024-11-21 01:46:28.031405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.512 ms 00:20:44.144 [2024-11-21 01:46:28.031413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:28.058285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:28.058488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:44.144 [2024-11-21 01:46:28.058514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.448 ms 00:20:44.144 [2024-11-21 01:46:28.058524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.144 [2024-11-21 01:46:28.071580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.144 [2024-11-21 01:46:28.071645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:44.144 [2024-11-21 01:46:28.071658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.931 ms 00:20:44.145 [2024-11-21 01:46:28.071667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.145 [2024-11-21 01:46:28.084416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.145 [2024-11-21 01:46:28.084460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:44.145 [2024-11-21 01:46:28.084472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.657 ms 00:20:44.145 [2024-11-21 01:46:28.084481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.145 [2024-11-21 01:46:28.085198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.145 [2024-11-21 01:46:28.085230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.145 [2024-11-21 01:46:28.085243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:44.145 [2024-11-21 01:46:28.085253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.151652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.151863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:44.406 [2024-11-21 01:46:28.151887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.367 ms 00:20:44.406 [2024-11-21 01:46:28.151898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.163163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:44.406 [2024-11-21 01:46:28.182672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.182721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.406 [2024-11-21 01:46:28.182734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.679 ms 00:20:44.406 [2024-11-21 01:46:28.182743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.182848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.182861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:44.406 [2024-11-21 01:46:28.182871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:44.406 [2024-11-21 01:46:28.182880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.182940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.182950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.406 [2024-11-21 01:46:28.182960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:44.406 [2024-11-21 01:46:28.182968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.182997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.183009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.406 [2024-11-21 01:46:28.183018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:44.406 [2024-11-21 01:46:28.183026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.183064] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:44.406 [2024-11-21 01:46:28.183076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.183084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:44.406 [2024-11-21 01:46:28.183093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:44.406 [2024-11-21 01:46:28.183100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.209841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.210040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.406 [2024-11-21 01:46:28.210062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.719 ms 00:20:44.406 [2024-11-21 01:46:28.210071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.406 [2024-11-21 01:46:28.210202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.406 [2024-11-21 01:46:28.210214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.406 [2024-11-21 01:46:28.210224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:44.406 [2024-11-21 01:46:28.210233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
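In the superblock metadata dump above, each region is reported as a hex block offset and block count (blk_offs/blk_sz), while the layout dump prints the same regions in MiB; the two agree if each FTL block is 4 KiB. A quick spot check in bash, using two values taken from the dump (the L2P region, type 0x2 with blk_sz 0x5a00, and the base-device data region, type 0x9 with blk_sz 0x1900000):

  echo $(( 0x5a00 * 4096 / 1048576 ))      # -> 90     (matches "Region l2p ... blocks: 90.00 MiB")
  echo $(( 0x1900000 * 4096 / 1048576 ))   # -> 102400 (matches "Region data_btm ... blocks: 102400.00 MiB")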
00:20:44.406 [2024-11-21 01:46:28.211361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.406 [2024-11-21 01:46:28.214859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 339.805 ms, result 0 00:20:44.406 [2024-11-21 01:46:28.216185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.406 [2024-11-21 01:46:28.229774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:44.666  [2024-11-21T01:46:28.623Z] Copying: 4096/4096 [kB] (average 15 MBps)[2024-11-21 01:46:28.497752] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.666 [2024-11-21 01:46:28.506749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.666 [2024-11-21 01:46:28.506807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.666 [2024-11-21 01:46:28.506820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.666 [2024-11-21 01:46:28.506835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.666 [2024-11-21 01:46:28.506859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:44.666 [2024-11-21 01:46:28.510027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.666 [2024-11-21 01:46:28.510064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.666 [2024-11-21 01:46:28.510076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:20:44.666 [2024-11-21 01:46:28.510084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.666 [2024-11-21 01:46:28.513129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.666 [2024-11-21 01:46:28.513339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.666 [2024-11-21 01:46:28.513362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:20:44.666 [2024-11-21 01:46:28.513371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.666 [2024-11-21 01:46:28.517873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.666 [2024-11-21 01:46:28.517921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.666 [2024-11-21 01:46:28.517932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.479 ms 00:20:44.666 [2024-11-21 01:46:28.517940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.525219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.525256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.667 [2024-11-21 01:46:28.525268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.244 ms 00:20:44.667 [2024-11-21 01:46:28.525275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.550267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.550315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.667 [2024-11-21 01:46:28.550328] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 24.934 ms 00:20:44.667 [2024-11-21 01:46:28.550336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.566184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.566242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.667 [2024-11-21 01:46:28.566258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.797 ms 00:20:44.667 [2024-11-21 01:46:28.566267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.566418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.566430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.667 [2024-11-21 01:46:28.566439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:44.667 [2024-11-21 01:46:28.566446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.592505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.592742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.667 [2024-11-21 01:46:28.592764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.032 ms 00:20:44.667 [2024-11-21 01:46:28.592771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.667 [2024-11-21 01:46:28.618236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.667 [2024-11-21 01:46:28.618281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.667 [2024-11-21 01:46:28.618293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.407 ms 00:20:44.667 [2024-11-21 01:46:28.618300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.929 [2024-11-21 01:46:28.643357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.929 [2024-11-21 01:46:28.643404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.929 [2024-11-21 01:46:28.643416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.007 ms 00:20:44.929 [2024-11-21 01:46:28.643423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.929 [2024-11-21 01:46:28.668363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.929 [2024-11-21 01:46:28.668410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.929 [2024-11-21 01:46:28.668422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.850 ms 00:20:44.929 [2024-11-21 01:46:28.668429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.929 [2024-11-21 01:46:28.668477] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.929 [2024-11-21 01:46:28.668493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:44.929 [2024-11-21 01:46:28.668528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.929 [2024-11-21 01:46:28.668879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.668984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669145] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.930 [2024-11-21 01:46:28.669345] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.930 [2024-11-21 01:46:28.669354] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:20:44.930 [2024-11-21 01:46:28.669362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.930 [2024-11-21 01:46:28.669371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:44.930 [2024-11-21 01:46:28.669379] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.930 [2024-11-21 01:46:28.669387] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.930 [2024-11-21 01:46:28.669395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.930 [2024-11-21 01:46:28.669403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.930 [2024-11-21 01:46:28.669412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.930 [2024-11-21 01:46:28.669418] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.930 [2024-11-21 01:46:28.669424] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:44.930 [2024-11-21 01:46:28.669432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.930 [2024-11-21 01:46:28.669443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.930 [2024-11-21 01:46:28.669453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:20:44.930 [2024-11-21 01:46:28.669461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.930 [2024-11-21 01:46:28.682647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.930 [2024-11-21 01:46:28.682828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.930 [2024-11-21 01:46:28.682846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.152 ms 00:20:44.930 [2024-11-21 01:46:28.682854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.930 [2024-11-21 01:46:28.683260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.930 [2024-11-21 01:46:28.683271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.930 [2024-11-21 01:46:28.683281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:44.930 [2024-11-21 01:46:28.683288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.930 [2024-11-21 01:46:28.722280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.930 [2024-11-21 01:46:28.722461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.931 [2024-11-21 01:46:28.722481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.722491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.722600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.722641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.931 [2024-11-21 01:46:28.722652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.722659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.722712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.722722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.931 [2024-11-21 01:46:28.722730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.722737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.722756] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.722769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.931 [2024-11-21 01:46:28.722777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.722785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.806407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.806467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.931 [2024-11-21 01:46:28.806480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.806488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.875634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.875685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.931 [2024-11-21 01:46:28.875698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.875707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.875785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.875794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.931 [2024-11-21 01:46:28.875804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.875812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.875846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.875856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.931 [2024-11-21 01:46:28.875871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.875879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.875983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.875993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.931 [2024-11-21 01:46:28.876003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.876010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.876045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.876054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.931 [2024-11-21 01:46:28.876063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.876074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.876119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.876128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.931 [2024-11-21 01:46:28.876137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.876145] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.876194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.931 [2024-11-21 01:46:28.876204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.931 [2024-11-21 01:46:28.876216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.931 [2024-11-21 01:46:28.876224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.931 [2024-11-21 01:46:28.876382] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.633 ms, result 0 00:20:45.872 00:20:45.872 00:20:45.872 01:46:29 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76898 00:20:45.872 01:46:29 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76898 00:20:45.872 01:46:29 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76898 ']' 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.872 01:46:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:45.872 [2024-11-21 01:46:29.731031] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:20:45.872 [2024-11-21 01:46:29.731187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76898 ] 00:20:46.133 [2024-11-21 01:46:29.891514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.133 [2024-11-21 01:46:30.012000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.076 01:46:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:47.076 01:46:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:47.076 01:46:30 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:47.076 [2024-11-21 01:46:30.919928] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.076 [2024-11-21 01:46:30.919989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:47.338 [2024-11-21 01:46:31.094806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.094855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:47.338 [2024-11-21 01:46:31.094870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:47.338 [2024-11-21 01:46:31.094878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.097677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.097829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:47.338 [2024-11-21 01:46:31.097850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:20:47.338 [2024-11-21 01:46:31.097858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.098248] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:47.338 [2024-11-21 01:46:31.099099] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:47.338 [2024-11-21 01:46:31.099135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.099144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:47.338 [2024-11-21 01:46:31.099155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:20:47.338 [2024-11-21 01:46:31.099163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.100477] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:47.338 [2024-11-21 01:46:31.113657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.113699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:47.338 [2024-11-21 01:46:31.113712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.187 ms 00:20:47.338 [2024-11-21 01:46:31.113722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.113810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.113824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:47.338 [2024-11-21 01:46:31.113832] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:47.338 [2024-11-21 01:46:31.113841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.119687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.119723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:47.338 [2024-11-21 01:46:31.119733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.797 ms 00:20:47.338 [2024-11-21 01:46:31.119742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.119841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.119853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:47.338 [2024-11-21 01:46:31.119861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:47.338 [2024-11-21 01:46:31.119870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.119909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.119918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:47.338 [2024-11-21 01:46:31.119926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:47.338 [2024-11-21 01:46:31.119935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.119956] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:47.338 [2024-11-21 01:46:31.123628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.123659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:47.338 [2024-11-21 01:46:31.123670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:20:47.338 [2024-11-21 01:46:31.123678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.123715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.338 [2024-11-21 01:46:31.123723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:47.338 [2024-11-21 01:46:31.123733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:47.338 [2024-11-21 01:46:31.123742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.338 [2024-11-21 01:46:31.123764] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:47.338 [2024-11-21 01:46:31.123781] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:47.338 [2024-11-21 01:46:31.123823] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:47.338 [2024-11-21 01:46:31.123837] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:47.338 [2024-11-21 01:46:31.123945] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:47.338 [2024-11-21 01:46:31.123955] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:47.338 [2024-11-21 01:46:31.123969] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:47.339 [2024-11-21 01:46:31.123981] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:47.339 [2024-11-21 01:46:31.123991] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:47.339 [2024-11-21 01:46:31.123999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:47.339 [2024-11-21 01:46:31.124008] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:47.339 [2024-11-21 01:46:31.124015] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:47.339 [2024-11-21 01:46:31.124026] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:47.339 [2024-11-21 01:46:31.124033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.339 [2024-11-21 01:46:31.124042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:47.339 [2024-11-21 01:46:31.124049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:47.339 [2024-11-21 01:46:31.124058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.339 [2024-11-21 01:46:31.124173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.339 [2024-11-21 01:46:31.124185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:47.339 [2024-11-21 01:46:31.124192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:47.339 [2024-11-21 01:46:31.124201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.339 [2024-11-21 01:46:31.124302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:47.339 [2024-11-21 01:46:31.124313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:47.339 [2024-11-21 01:46:31.124321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:47.339 [2024-11-21 01:46:31.124346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:47.339 [2024-11-21 01:46:31.124374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.339 [2024-11-21 01:46:31.124389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:47.339 [2024-11-21 01:46:31.124397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:47.339 [2024-11-21 01:46:31.124403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:47.339 [2024-11-21 01:46:31.124411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:47.339 [2024-11-21 01:46:31.124418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:47.339 [2024-11-21 01:46:31.124426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 
[2024-11-21 01:46:31.124434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:47.339 [2024-11-21 01:46:31.124443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:47.339 [2024-11-21 01:46:31.124470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:47.339 [2024-11-21 01:46:31.124494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:47.339 [2024-11-21 01:46:31.124516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:47.339 [2024-11-21 01:46:31.124538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:47.339 [2024-11-21 01:46:31.124560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.339 [2024-11-21 01:46:31.124575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:47.339 [2024-11-21 01:46:31.124584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:47.339 [2024-11-21 01:46:31.124590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:47.339 [2024-11-21 01:46:31.124599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:47.339 [2024-11-21 01:46:31.124605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:47.339 [2024-11-21 01:46:31.124635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:47.339 [2024-11-21 01:46:31.124651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:47.339 [2024-11-21 01:46:31.124658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124667] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:47.339 [2024-11-21 01:46:31.124675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:47.339 [2024-11-21 01:46:31.124687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:47.339 [2024-11-21 01:46:31.124721] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:47.339 [2024-11-21 01:46:31.124730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:47.339 [2024-11-21 01:46:31.124739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:47.339 [2024-11-21 01:46:31.124746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:47.339 [2024-11-21 01:46:31.124761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:47.339 [2024-11-21 01:46:31.124768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:47.339 [2024-11-21 01:46:31.124778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:47.339 [2024-11-21 01:46:31.124787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:47.339 [2024-11-21 01:46:31.124806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:47.339 [2024-11-21 01:46:31.124816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:47.339 [2024-11-21 01:46:31.124824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:47.339 [2024-11-21 01:46:31.124832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:47.339 [2024-11-21 01:46:31.124839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:47.339 [2024-11-21 01:46:31.124848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:47.339 [2024-11-21 01:46:31.124855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:47.339 [2024-11-21 01:46:31.124864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:47.339 [2024-11-21 01:46:31.124870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:47.339 [2024-11-21 01:46:31.124910] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:47.339 [2024-11-21 
01:46:31.124918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:47.339 [2024-11-21 01:46:31.124937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:47.339 [2024-11-21 01:46:31.124946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:47.339 [2024-11-21 01:46:31.124952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:47.339 [2024-11-21 01:46:31.124961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.339 [2024-11-21 01:46:31.124968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:47.339 [2024-11-21 01:46:31.124977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:20:47.339 [2024-11-21 01:46:31.124984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.339 [2024-11-21 01:46:31.153715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.339 [2024-11-21 01:46:31.153757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:47.339 [2024-11-21 01:46:31.153771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.670 ms 00:20:47.339 [2024-11-21 01:46:31.153779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.339 [2024-11-21 01:46:31.153908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.339 [2024-11-21 01:46:31.153920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:47.339 [2024-11-21 01:46:31.153930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:47.340 [2024-11-21 01:46:31.153938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.186814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.186844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:47.340 [2024-11-21 01:46:31.186860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.853 ms 00:20:47.340 [2024-11-21 01:46:31.186867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.186937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.186946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:47.340 [2024-11-21 01:46:31.186956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:47.340 [2024-11-21 01:46:31.186963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.187297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.187311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:47.340 [2024-11-21 01:46:31.187323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:47.340 [2024-11-21 01:46:31.187330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.187461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.187469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:47.340 [2024-11-21 01:46:31.187479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:47.340 [2024-11-21 01:46:31.187486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.202000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.202022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:47.340 [2024-11-21 01:46:31.202033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.492 ms 00:20:47.340 [2024-11-21 01:46:31.202040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.215052] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:47.340 [2024-11-21 01:46:31.215083] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:47.340 [2024-11-21 01:46:31.215097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.215105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:47.340 [2024-11-21 01:46:31.215115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.965 ms 00:20:47.340 [2024-11-21 01:46:31.215122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.239567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.239598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:47.340 [2024-11-21 01:46:31.239623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.375 ms 00:20:47.340 [2024-11-21 01:46:31.239632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.251388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.251416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:47.340 [2024-11-21 01:46:31.251429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.683 ms 00:20:47.340 [2024-11-21 01:46:31.251436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.262919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.262948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:47.340 [2024-11-21 01:46:31.262959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.419 ms 00:20:47.340 [2024-11-21 01:46:31.262966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.340 [2024-11-21 01:46:31.263573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.340 [2024-11-21 01:46:31.263590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:47.340 [2024-11-21 01:46:31.263601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:20:47.340 [2024-11-21 01:46:31.263608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 
01:46:31.335137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.335188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:47.601 [2024-11-21 01:46:31.335205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.482 ms 00:20:47.601 [2024-11-21 01:46:31.335214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.345814] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:47.601 [2024-11-21 01:46:31.360509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.360677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:47.601 [2024-11-21 01:46:31.360698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.152 ms 00:20:47.601 [2024-11-21 01:46:31.360708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.360789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.360802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:47.601 [2024-11-21 01:46:31.360811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:47.601 [2024-11-21 01:46:31.360820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.360868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.360879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:47.601 [2024-11-21 01:46:31.360887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:47.601 [2024-11-21 01:46:31.360896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.360921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.360931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:47.601 [2024-11-21 01:46:31.360939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:47.601 [2024-11-21 01:46:31.360950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.360980] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:47.601 [2024-11-21 01:46:31.360993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.361001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:47.601 [2024-11-21 01:46:31.361013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:47.601 [2024-11-21 01:46:31.361020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.385278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.385327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:47.601 [2024-11-21 01:46:31.385342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.231 ms 00:20:47.601 [2024-11-21 01:46:31.385349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.385447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.601 [2024-11-21 01:46:31.385458] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:47.601 [2024-11-21 01:46:31.385468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:47.601 [2024-11-21 01:46:31.385477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.601 [2024-11-21 01:46:31.386570] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:47.601 [2024-11-21 01:46:31.389670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.485 ms, result 0 00:20:47.601 [2024-11-21 01:46:31.391848] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:47.601 Some configs were skipped because the RPC state that can call them passed over. 00:20:47.601 01:46:31 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:47.861 [2024-11-21 01:46:31.632378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:47.861 [2024-11-21 01:46:31.632582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:47.861 [2024-11-21 01:46:31.633080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.366 ms 00:20:47.861 [2024-11-21 01:46:31.633143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:47.862 [2024-11-21 01:46:31.633342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 4.322 ms, result 0 00:20:47.862 true 00:20:47.862 01:46:31 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:48.123 [2024-11-21 01:46:31.853914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.123 [2024-11-21 01:46:31.854101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:48.123 [2024-11-21 01:46:31.854128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:20:48.123 [2024-11-21 01:46:31.854137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.123 [2024-11-21 01:46:31.854185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.960 ms, result 0 00:20:48.123 true 00:20:48.123 01:46:31 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76898 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76898 ']' 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76898 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76898 00:20:48.123 killing process with pid 76898 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76898' 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76898 00:20:48.123 01:46:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76898 00:20:49.068 [2024-11-21 01:46:32.656058] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.656133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.068 [2024-11-21 01:46:32.656149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.068 [2024-11-21 01:46:32.656159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.656185] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:49.068 [2024-11-21 01:46:32.659307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.659498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.068 [2024-11-21 01:46:32.659529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:20:49.068 [2024-11-21 01:46:32.659538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.659898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.659912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.068 [2024-11-21 01:46:32.659924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:20:49.068 [2024-11-21 01:46:32.659933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.664603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.664655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.068 [2024-11-21 01:46:32.664671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:20:49.068 [2024-11-21 01:46:32.664679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.671723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.671768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.068 [2024-11-21 01:46:32.671783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.990 ms 00:20:49.068 [2024-11-21 01:46:32.671791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.683138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.683168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.068 [2024-11-21 01:46:32.683182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:20:49.068 [2024-11-21 01:46:32.683196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.690682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.690714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.068 [2024-11-21 01:46:32.690727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.446 ms 00:20:49.068 [2024-11-21 01:46:32.690735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.068 [2024-11-21 01:46:32.690868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.068 [2024-11-21 01:46:32.690878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.069 [2024-11-21 01:46:32.690888] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:49.069 [2024-11-21 01:46:32.690895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.069 [2024-11-21 01:46:32.701724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.069 [2024-11-21 01:46:32.701848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.069 [2024-11-21 01:46:32.701867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.808 ms 00:20:49.069 [2024-11-21 01:46:32.701874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.069 [2024-11-21 01:46:32.711952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.069 [2024-11-21 01:46:32.711981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.069 [2024-11-21 01:46:32.711995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.042 ms 00:20:49.069 [2024-11-21 01:46:32.712002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.069 [2024-11-21 01:46:32.721733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.069 [2024-11-21 01:46:32.721761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.069 [2024-11-21 01:46:32.721775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.693 ms 00:20:49.069 [2024-11-21 01:46:32.721782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.069 [2024-11-21 01:46:32.731214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.069 [2024-11-21 01:46:32.731242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.069 [2024-11-21 01:46:32.731253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.369 ms 00:20:49.069 [2024-11-21 01:46:32.731259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.069 [2024-11-21 01:46:32.731295] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.069 [2024-11-21 01:46:32.731309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731395] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 
[2024-11-21 01:46:32.731604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.069 [2024-11-21 01:46:32.731802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:49.070 [2024-11-21 01:46:32.731818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.731998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.070 [2024-11-21 01:46:32.732153] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.070 [2024-11-21 01:46:32.732166] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:20:49.070 [2024-11-21 01:46:32.732179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:49.070 [2024-11-21 01:46:32.732190] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:49.070 [2024-11-21 01:46:32.732197] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.070 [2024-11-21 01:46:32.732206] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.070 [2024-11-21 01:46:32.732213] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.070 [2024-11-21 01:46:32.732222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.070 [2024-11-21 01:46:32.732229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.070 [2024-11-21 01:46:32.732237] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.070 [2024-11-21 01:46:32.732243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.070 [2024-11-21 01:46:32.732251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
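The trace_step NOTICE entries above (and throughout the rest of this run) come in Action/name/duration/status groups. A minimal, hypothetical post-processing helper — not part of SPDK or this CI job, with "build.log" as an assumed filename — could pull the name/duration pairs out of a saved copy of this console output and total the time spent per management step; it relies only on the "name: ... / duration: ... ms" pattern visible here.

# Hypothetical helper (not part of SPDK): summarize per-step durations from the
# trace_step NOTICE entries in a saved copy of this console log ("build.log" is
# an assumed filename).
import re
from collections import defaultdict

text = open("build.log", encoding="utf-8", errors="replace").read()

# Each step logs "name: <step>" immediately followed by "duration: <float> ms";
# the lazy match up to the next hh:mm:ss.mmm console timestamp keeps
# multi-word step names intact even in this flattened log.
pairs = re.findall(
    r"name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([\d.]+) ms",
    text,
    flags=re.DOTALL,
)

totals = defaultdict(float)
for name, ms in pairs:
    totals[name] += float(ms)

for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ms:10.3f} ms  {name}")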
00:20:49.070 [2024-11-21 01:46:32.732259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.070 [2024-11-21 01:46:32.732269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:20:49.070 [2024-11-21 01:46:32.732276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.745243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.070 [2024-11-21 01:46:32.745364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.070 [2024-11-21 01:46:32.745423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.932 ms 00:20:49.070 [2024-11-21 01:46:32.745447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.745858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.070 [2024-11-21 01:46:32.745900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.070 [2024-11-21 01:46:32.745968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:20:49.070 [2024-11-21 01:46:32.745992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.791934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.070 [2024-11-21 01:46:32.792068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.070 [2024-11-21 01:46:32.792127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.070 [2024-11-21 01:46:32.792150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.792262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.070 [2024-11-21 01:46:32.792288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.070 [2024-11-21 01:46:32.792309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.070 [2024-11-21 01:46:32.792330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.792394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.070 [2024-11-21 01:46:32.792490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.070 [2024-11-21 01:46:32.792514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.070 [2024-11-21 01:46:32.792533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.070 [2024-11-21 01:46:32.792564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.792584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.071 [2024-11-21 01:46:32.792604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.792698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.875686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.875904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.071 [2024-11-21 01:46:32.875971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.875995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 
01:46:32.931342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.931495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.071 [2024-11-21 01:46:32.931546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.931568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.931671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.931693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.071 [2024-11-21 01:46:32.931713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.931728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.931765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.931781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.071 [2024-11-21 01:46:32.931799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.931868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.931972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.931991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.071 [2024-11-21 01:46:32.932009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.932023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.932063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.932150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.071 [2024-11-21 01:46:32.932167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.932182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.932226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.932245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.071 [2024-11-21 01:46:32.932312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.932332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.932390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.071 [2024-11-21 01:46:32.932409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.071 [2024-11-21 01:46:32.932426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.071 [2024-11-21 01:46:32.932441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.071 [2024-11-21 01:46:32.932571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.493 ms, result 0 00:20:49.644 01:46:33 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:49.644 [2024-11-21 01:46:33.512214] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:20:49.644 [2024-11-21 01:46:33.512521] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76951 ] 00:20:49.904 [2024-11-21 01:46:33.667555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.904 [2024-11-21 01:46:33.745137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.166 [2024-11-21 01:46:33.950588] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.166 [2024-11-21 01:46:33.950649] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:50.166 [2024-11-21 01:46:34.102357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.166 [2024-11-21 01:46:34.102393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:50.166 [2024-11-21 01:46:34.102404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.166 [2024-11-21 01:46:34.102411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-11-21 01:46:34.104503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.166 [2024-11-21 01:46:34.104694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:50.166 [2024-11-21 01:46:34.104708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.080 ms 00:20:50.166 [2024-11-21 01:46:34.104714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-11-21 01:46:34.104771] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:50.166 [2024-11-21 01:46:34.105332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:50.166 [2024-11-21 01:46:34.105349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.166 [2024-11-21 01:46:34.105356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:50.166 [2024-11-21 01:46:34.105363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:20:50.166 [2024-11-21 01:46:34.105369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-11-21 01:46:34.106346] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:50.166 [2024-11-21 01:46:34.115806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.166 [2024-11-21 01:46:34.115919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:50.166 [2024-11-21 01:46:34.115933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.461 ms 00:20:50.166 [2024-11-21 01:46:34.115940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.166 [2024-11-21 01:46:34.116004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.166 [2024-11-21 01:46:34.116013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:50.166 [2024-11-21 01:46:34.116019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:50.166 [2024-11-21 
01:46:34.116025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.120386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.427 [2024-11-21 01:46:34.120410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:50.427 [2024-11-21 01:46:34.120418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.333 ms 00:20:50.427 [2024-11-21 01:46:34.120423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.120492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.427 [2024-11-21 01:46:34.120500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:50.427 [2024-11-21 01:46:34.120507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:50.427 [2024-11-21 01:46:34.120512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.120530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.427 [2024-11-21 01:46:34.120538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:50.427 [2024-11-21 01:46:34.120544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.427 [2024-11-21 01:46:34.120549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.120566] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:50.427 [2024-11-21 01:46:34.123166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.427 [2024-11-21 01:46:34.123270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:50.427 [2024-11-21 01:46:34.123283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:20:50.427 [2024-11-21 01:46:34.123289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.123318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.427 [2024-11-21 01:46:34.123324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:50.427 [2024-11-21 01:46:34.123330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.427 [2024-11-21 01:46:34.123336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.427 [2024-11-21 01:46:34.123349] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:50.427 [2024-11-21 01:46:34.123366] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:50.427 [2024-11-21 01:46:34.123391] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:50.427 [2024-11-21 01:46:34.123403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:50.427 [2024-11-21 01:46:34.123482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:50.427 [2024-11-21 01:46:34.123490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:50.427 [2024-11-21 01:46:34.123498] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
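In the FTL layout dump that follows, region sizes appear twice: in MiB in the human-readable dump and in blocks (hex blk_offs/blk_sz) in the superblock metadata section. A small sketch, assuming the FTL metadata block size is 4 KiB (the log itself never prints the block size), cross-checks the 90.00 MiB l2p region against both the 0x5a00-block superblock entry and the reported 23592960 four-byte L2P entries:

# Sketch only: cross-check the layout figures below, assuming a 4 KiB FTL
# metadata block size (assumption; not stated in this log).
FTL_BLOCK_SIZE = 4096          # assumed
MIB = 1024 * 1024

# Superblock dump: "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00" is the
# L2P region; the readable dump lists the same region as "l2p ... 90.00 MiB".
print(0x5A00 * FTL_BLOCK_SIZE / MIB)   # -> 90.0

# Independent check: "L2P entries: 23592960" with "L2P address size: 4" bytes
# per entry gives the same region size.
print(23592960 * 4 / MIB)              # -> 90.0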
00:20:50.427 [2024-11-21 01:46:34.123506] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:50.427 [2024-11-21 01:46:34.123514] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:50.427 [2024-11-21 01:46:34.123521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:50.427 [2024-11-21 01:46:34.123527] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:50.427 [2024-11-21 01:46:34.123533] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:50.428 [2024-11-21 01:46:34.123538] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:50.428 [2024-11-21 01:46:34.123543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.123549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:50.428 [2024-11-21 01:46:34.123555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:20:50.428 [2024-11-21 01:46:34.123561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.428 [2024-11-21 01:46:34.123642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.123649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:50.428 [2024-11-21 01:46:34.123657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:50.428 [2024-11-21 01:46:34.123663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.428 [2024-11-21 01:46:34.123741] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:50.428 [2024-11-21 01:46:34.123749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:50.428 [2024-11-21 01:46:34.123756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:50.428 [2024-11-21 01:46:34.123773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:50.428 [2024-11-21 01:46:34.123790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.428 [2024-11-21 01:46:34.123801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:50.428 [2024-11-21 01:46:34.123806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:50.428 [2024-11-21 01:46:34.123811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:50.428 [2024-11-21 01:46:34.123822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:50.428 [2024-11-21 01:46:34.123829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:50.428 [2024-11-21 01:46:34.123835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:50.428 [2024-11-21 01:46:34.123851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:50.428 [2024-11-21 01:46:34.123870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:50.428 [2024-11-21 01:46:34.123889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:50.428 [2024-11-21 01:46:34.123908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:50.428 [2024-11-21 01:46:34.123927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:50.428 [2024-11-21 01:46:34.123939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:50.428 [2024-11-21 01:46:34.123945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.428 [2024-11-21 01:46:34.123957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:50.428 [2024-11-21 01:46:34.123963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:50.428 [2024-11-21 01:46:34.123969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:50.428 [2024-11-21 01:46:34.123975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:50.428 [2024-11-21 01:46:34.123982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:50.428 [2024-11-21 01:46:34.123988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.123993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:50.428 [2024-11-21 01:46:34.124000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:50.428 [2024-11-21 01:46:34.124006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.124013] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:50.428 [2024-11-21 01:46:34.124019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:50.428 [2024-11-21 01:46:34.124026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:50.428 [2024-11-21 01:46:34.124034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:50.428 [2024-11-21 01:46:34.124039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:50.428 [2024-11-21 01:46:34.124046] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:50.428 [2024-11-21 01:46:34.124051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:50.428 [2024-11-21 01:46:34.124063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:50.428 [2024-11-21 01:46:34.124068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:50.428 [2024-11-21 01:46:34.124073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:50.428 [2024-11-21 01:46:34.124079] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:50.428 [2024-11-21 01:46:34.124086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:50.428 [2024-11-21 01:46:34.124099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:50.428 [2024-11-21 01:46:34.124104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:50.428 [2024-11-21 01:46:34.124110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:50.428 [2024-11-21 01:46:34.124115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:50.428 [2024-11-21 01:46:34.124120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:50.428 [2024-11-21 01:46:34.124125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:50.428 [2024-11-21 01:46:34.124131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:50.428 [2024-11-21 01:46:34.124136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:50.428 [2024-11-21 01:46:34.124142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:50.428 [2024-11-21 01:46:34.124170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:50.428 [2024-11-21 01:46:34.124176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:50.428 [2024-11-21 01:46:34.124187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:50.428 [2024-11-21 01:46:34.124193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:50.428 [2024-11-21 01:46:34.124198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:50.428 [2024-11-21 01:46:34.124204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.124209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:50.428 [2024-11-21 01:46:34.124217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:20:50.428 [2024-11-21 01:46:34.124222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.428 [2024-11-21 01:46:34.144944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.144974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:50.428 [2024-11-21 01:46:34.144982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.685 ms 00:20:50.428 [2024-11-21 01:46:34.144988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.428 [2024-11-21 01:46:34.145078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.145089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:50.428 [2024-11-21 01:46:34.145095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:50.428 [2024-11-21 01:46:34.145101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.428 [2024-11-21 01:46:34.185019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.428 [2024-11-21 01:46:34.185131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:50.428 [2024-11-21 01:46:34.185145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.902 ms 00:20:50.428 [2024-11-21 01:46:34.185155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.185212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.185221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:50.429 [2024-11-21 01:46:34.185228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:50.429 [2024-11-21 01:46:34.185234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.185536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.185548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:50.429 [2024-11-21 01:46:34.185556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:50.429 [2024-11-21 01:46:34.185561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.185685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.185694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:50.429 [2024-11-21 01:46:34.185700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:50.429 [2024-11-21 01:46:34.185706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.196387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.196481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:50.429 [2024-11-21 01:46:34.196493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.665 ms 00:20:50.429 [2024-11-21 01:46:34.196499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.206504] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:50.429 [2024-11-21 01:46:34.206533] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:50.429 [2024-11-21 01:46:34.206542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.206548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:50.429 [2024-11-21 01:46:34.206555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.955 ms 00:20:50.429 [2024-11-21 01:46:34.206560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.224852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.224894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:50.429 [2024-11-21 01:46:34.224903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.220 ms 00:20:50.429 [2024-11-21 01:46:34.224909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.233865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.233890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:50.429 [2024-11-21 01:46:34.233897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:20:50.429 [2024-11-21 01:46:34.233903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.242623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.242648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:50.429 [2024-11-21 01:46:34.242656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.680 ms 00:20:50.429 [2024-11-21 01:46:34.242661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.243116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.243135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:50.429 [2024-11-21 01:46:34.243142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:20:50.429 [2024-11-21 01:46:34.243148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.286919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.286953] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.429 [2024-11-21 01:46:34.286964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.754 ms 00:20:50.429 [2024-11-21 01:46:34.286971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.294883] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:50.429 [2024-11-21 01:46:34.306324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.306449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.429 [2024-11-21 01:46:34.306463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.289 ms 00:20:50.429 [2024-11-21 01:46:34.306470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.306549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.306557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.429 [2024-11-21 01:46:34.306564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:50.429 [2024-11-21 01:46:34.306570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.306605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.306628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.429 [2024-11-21 01:46:34.306635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:50.429 [2024-11-21 01:46:34.306641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.306663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.306672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.429 [2024-11-21 01:46:34.306678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:50.429 [2024-11-21 01:46:34.306684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.306707] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.429 [2024-11-21 01:46:34.306714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.306720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.429 [2024-11-21 01:46:34.306726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:50.429 [2024-11-21 01:46:34.306732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.324519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.324547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.429 [2024-11-21 01:46:34.324556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.773 ms 00:20:50.429 [2024-11-21 01:46:34.324562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.324646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.429 [2024-11-21 01:46:34.324655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.429 [2024-11-21 01:46:34.324678] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:50.429 [2024-11-21 01:46:34.324684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.429 [2024-11-21 01:46:34.325419] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:50.429 [2024-11-21 01:46:34.327653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 222.834 ms, result 0 00:20:50.429 [2024-11-21 01:46:34.328543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:50.429 [2024-11-21 01:46:34.343329] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.817  [2024-11-21T01:46:36.717Z] Copying: 36/256 [MB] (36 MBps) [2024-11-21T01:46:37.661Z] Copying: 52/256 [MB] (15 MBps) [2024-11-21T01:46:38.605Z] Copying: 68/256 [MB] (16 MBps) [2024-11-21T01:46:39.548Z] Copying: 84/256 [MB] (16 MBps) [2024-11-21T01:46:40.491Z] Copying: 95/256 [MB] (10 MBps) [2024-11-21T01:46:41.504Z] Copying: 106/256 [MB] (10 MBps) [2024-11-21T01:46:42.447Z] Copying: 128/256 [MB] (21 MBps) [2024-11-21T01:46:43.390Z] Copying: 139/256 [MB] (11 MBps) [2024-11-21T01:46:44.776Z] Copying: 155/256 [MB] (15 MBps) [2024-11-21T01:46:45.721Z] Copying: 167/256 [MB] (12 MBps) [2024-11-21T01:46:46.665Z] Copying: 191/256 [MB] (23 MBps) [2024-11-21T01:46:47.609Z] Copying: 207/256 [MB] (15 MBps) [2024-11-21T01:46:48.552Z] Copying: 225/256 [MB] (18 MBps) [2024-11-21T01:46:49.124Z] Copying: 245/256 [MB] (19 MBps) [2024-11-21T01:46:49.392Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-21 01:46:49.204390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:05.435 [2024-11-21 01:46:49.214962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.215203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:05.435 [2024-11-21 01:46:49.215229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.435 [2024-11-21 01:46:49.215251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.215290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:05.435 [2024-11-21 01:46:49.219364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.219539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:05.435 [2024-11-21 01:46:49.219561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:21:05.435 [2024-11-21 01:46:49.219571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.219890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.219903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:05.435 [2024-11-21 01:46:49.219913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:21:05.435 [2024-11-21 01:46:49.219922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.223948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.223984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:21:05.435 [2024-11-21 01:46:49.223994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:21:05.435 [2024-11-21 01:46:49.224003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.231489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.231674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:05.435 [2024-11-21 01:46:49.231697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.464 ms 00:21:05.435 [2024-11-21 01:46:49.231707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.258447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.258497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:05.435 [2024-11-21 01:46:49.258511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.664 ms 00:21:05.435 [2024-11-21 01:46:49.258519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.274151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.274333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:05.435 [2024-11-21 01:46:49.274355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.577 ms 00:21:05.435 [2024-11-21 01:46:49.274371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.274511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.274522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:05.435 [2024-11-21 01:46:49.274532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:21:05.435 [2024-11-21 01:46:49.274540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.300299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.300345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:05.435 [2024-11-21 01:46:49.300358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:21:05.435 [2024-11-21 01:46:49.300365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.325485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.325530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:05.435 [2024-11-21 01:46:49.325543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.053 ms 00:21:05.435 [2024-11-21 01:46:49.325549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.349873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 01:46:49.349916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:05.435 [2024-11-21 01:46:49.349929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.275 ms 00:21:05.435 [2024-11-21 01:46:49.349937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.374402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.435 [2024-11-21 
01:46:49.374448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:05.435 [2024-11-21 01:46:49.374460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.382 ms 00:21:05.435 [2024-11-21 01:46:49.374468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.435 [2024-11-21 01:46:49.374516] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:05.435 [2024-11-21 01:46:49.374531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.374995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.375003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.375011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.375018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:05.435 [2024-11-21 01:46:49.375026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375126] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375330] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:05.436 [2024-11-21 01:46:49.375371] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:05.436 [2024-11-21 01:46:49.375379] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 45bfb092-704a-4e18-9717-d1701cdaabcd 00:21:05.436 [2024-11-21 01:46:49.375388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:05.436 [2024-11-21 01:46:49.375397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:05.436 [2024-11-21 01:46:49.375405] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:05.436 [2024-11-21 01:46:49.375414] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:05.436 [2024-11-21 01:46:49.375422] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:05.436 [2024-11-21 01:46:49.375430] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:05.436 [2024-11-21 01:46:49.375438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:05.436 [2024-11-21 01:46:49.375445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:05.436 [2024-11-21 01:46:49.375451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:05.436 [2024-11-21 01:46:49.375459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.436 [2024-11-21 01:46:49.375471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:05.436 [2024-11-21 01:46:49.375481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:21:05.436 [2024-11-21 01:46:49.375489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.389268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.697 [2024-11-21 01:46:49.389330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:05.697 [2024-11-21 01:46:49.389343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.743 ms 00:21:05.697 [2024-11-21 01:46:49.389352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.389786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.697 [2024-11-21 01:46:49.389806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:05.697 [2024-11-21 01:46:49.389816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:21:05.697 [2024-11-21 01:46:49.389824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.428661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.428709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:05.697 [2024-11-21 01:46:49.428721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.428729] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.428840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.428850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:05.697 [2024-11-21 01:46:49.428859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.428867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.428917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.428927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:05.697 [2024-11-21 01:46:49.428936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.428943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.428963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.428975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:05.697 [2024-11-21 01:46:49.428983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.428992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.514046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.514112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:05.697 [2024-11-21 01:46:49.514125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.514134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:05.697 [2024-11-21 01:46:49.583387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.697 [2024-11-21 01:46:49.583494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.697 [2024-11-21 01:46:49.583557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.697 [2024-11-21 01:46:49.583718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:05.697 [2024-11-21 01:46:49.583726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:05.697 [2024-11-21 01:46:49.583800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.697 [2024-11-21 01:46:49.583872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.697 [2024-11-21 01:46:49.583930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:05.697 [2024-11-21 01:46:49.583940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.697 [2024-11-21 01:46:49.583953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:05.697 [2024-11-21 01:46:49.583961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.698 [2024-11-21 01:46:49.584120] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.156 ms, result 0 00:21:06.641 00:21:06.641 00:21:06.641 01:46:50 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:07.214 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:07.214 Process with pid 76898 is not found 00:21:07.214 01:46:50 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76898 00:21:07.214 01:46:50 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76898 ']' 00:21:07.214 01:46:50 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76898 00:21:07.214 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76898) - No such process 00:21:07.214 01:46:50 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76898 is not found' 00:21:07.214 ************************************ 00:21:07.214 END TEST ftl_trim 00:21:07.214 ************************************ 00:21:07.214 00:21:07.214 real 1m26.403s 00:21:07.214 user 1m41.768s 00:21:07.214 sys 0m16.950s 00:21:07.214 01:46:50 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.214 01:46:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:07.214 01:46:51 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:07.214 01:46:51 
ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:07.214 01:46:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.214 01:46:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:07.214 ************************************ 00:21:07.214 START TEST ftl_restore 00:21:07.214 ************************************ 00:21:07.214 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:07.214 * Looking for test storage... 00:21:07.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.214 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.214 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.214 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.476 01:46:51 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.476 --rc genhtml_branch_coverage=1 00:21:07.476 --rc genhtml_function_coverage=1 00:21:07.476 --rc genhtml_legend=1 00:21:07.476 --rc geninfo_all_blocks=1 00:21:07.476 --rc geninfo_unexecuted_blocks=1 00:21:07.476 00:21:07.476 ' 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.476 --rc genhtml_branch_coverage=1 00:21:07.476 --rc genhtml_function_coverage=1 00:21:07.476 --rc genhtml_legend=1 00:21:07.476 --rc geninfo_all_blocks=1 00:21:07.476 --rc geninfo_unexecuted_blocks=1 00:21:07.476 00:21:07.476 ' 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.476 --rc genhtml_branch_coverage=1 00:21:07.476 --rc genhtml_function_coverage=1 00:21:07.476 --rc genhtml_legend=1 00:21:07.476 --rc geninfo_all_blocks=1 00:21:07.476 --rc geninfo_unexecuted_blocks=1 00:21:07.476 00:21:07.476 ' 00:21:07.476 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.476 --rc genhtml_branch_coverage=1 00:21:07.476 --rc genhtml_function_coverage=1 00:21:07.476 --rc genhtml_legend=1 00:21:07.476 --rc geninfo_all_blocks=1 00:21:07.476 --rc geninfo_unexecuted_blocks=1 00:21:07.476 00:21:07.476 ' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.476 01:46:51 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.vJkpPeAEnT 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:07.477 
01:46:51 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77200 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77200 00:21:07.477 01:46:51 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77200 ']' 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.477 01:46:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:07.477 [2024-11-21 01:46:51.313737] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:07.477 [2024-11-21 01:46:51.314152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77200 ] 00:21:07.739 [2024-11-21 01:46:51.480085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.739 [2024-11-21 01:46:51.598125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.310 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.310 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:08.310 01:46:52 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:08.571 01:46:52 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:08.571 01:46:52 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:08.571 01:46:52 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:08.571 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:08.571 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.571 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:08.571 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:08.571 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:08.831 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.831 { 00:21:08.831 "name": "nvme0n1", 00:21:08.831 "aliases": [ 00:21:08.831 "fc1ed663-436e-4f22-b404-be9d211face7" 00:21:08.831 ], 00:21:08.831 "product_name": "NVMe disk", 00:21:08.831 "block_size": 4096, 00:21:08.831 "num_blocks": 1310720, 00:21:08.831 "uuid": 
"fc1ed663-436e-4f22-b404-be9d211face7", 00:21:08.831 "numa_id": -1, 00:21:08.831 "assigned_rate_limits": { 00:21:08.831 "rw_ios_per_sec": 0, 00:21:08.831 "rw_mbytes_per_sec": 0, 00:21:08.831 "r_mbytes_per_sec": 0, 00:21:08.831 "w_mbytes_per_sec": 0 00:21:08.831 }, 00:21:08.831 "claimed": true, 00:21:08.831 "claim_type": "read_many_write_one", 00:21:08.831 "zoned": false, 00:21:08.831 "supported_io_types": { 00:21:08.831 "read": true, 00:21:08.831 "write": true, 00:21:08.831 "unmap": true, 00:21:08.831 "flush": true, 00:21:08.831 "reset": true, 00:21:08.831 "nvme_admin": true, 00:21:08.831 "nvme_io": true, 00:21:08.831 "nvme_io_md": false, 00:21:08.832 "write_zeroes": true, 00:21:08.832 "zcopy": false, 00:21:08.832 "get_zone_info": false, 00:21:08.832 "zone_management": false, 00:21:08.832 "zone_append": false, 00:21:08.832 "compare": true, 00:21:08.832 "compare_and_write": false, 00:21:08.832 "abort": true, 00:21:08.832 "seek_hole": false, 00:21:08.832 "seek_data": false, 00:21:08.832 "copy": true, 00:21:08.832 "nvme_iov_md": false 00:21:08.832 }, 00:21:08.832 "driver_specific": { 00:21:08.832 "nvme": [ 00:21:08.832 { 00:21:08.832 "pci_address": "0000:00:11.0", 00:21:08.832 "trid": { 00:21:08.832 "trtype": "PCIe", 00:21:08.832 "traddr": "0000:00:11.0" 00:21:08.832 }, 00:21:08.832 "ctrlr_data": { 00:21:08.832 "cntlid": 0, 00:21:08.832 "vendor_id": "0x1b36", 00:21:08.832 "model_number": "QEMU NVMe Ctrl", 00:21:08.832 "serial_number": "12341", 00:21:08.832 "firmware_revision": "8.0.0", 00:21:08.832 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:08.832 "oacs": { 00:21:08.832 "security": 0, 00:21:08.832 "format": 1, 00:21:08.832 "firmware": 0, 00:21:08.832 "ns_manage": 1 00:21:08.832 }, 00:21:08.832 "multi_ctrlr": false, 00:21:08.832 "ana_reporting": false 00:21:08.832 }, 00:21:08.832 "vs": { 00:21:08.832 "nvme_version": "1.4" 00:21:08.832 }, 00:21:08.832 "ns_data": { 00:21:08.832 "id": 1, 00:21:08.832 "can_share": false 00:21:08.832 } 00:21:08.832 } 00:21:08.832 ], 00:21:08.832 "mp_policy": "active_passive" 00:21:08.832 } 00:21:08.832 } 00:21:08.832 ]' 00:21:08.832 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.832 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.832 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:09.128 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:09.128 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:09.128 01:46:52 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:09.128 01:46:52 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:09.128 01:46:52 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:09.128 01:46:52 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:09.128 01:46:52 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:09.128 01:46:52 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:09.128 01:46:53 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=529f01b7-ed5c-4cb7-a021-1818560a41dd 00:21:09.128 01:46:53 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:09.128 01:46:53 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 529f01b7-ed5c-4cb7-a021-1818560a41dd 00:21:09.387 01:46:53 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:09.647 01:46:53 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=342d81aa-c9f5-410e-8bbc-840dc7fcdd1c 00:21:09.647 01:46:53 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 342d81aa-c9f5-410e-8bbc-840dc7fcdd1c 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:09.908 01:46:53 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:09.908 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:09.908 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:09.908 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:09.908 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:09.908 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.170 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:10.170 { 00:21:10.170 "name": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:10.170 "aliases": [ 00:21:10.170 "lvs/nvme0n1p0" 00:21:10.170 ], 00:21:10.170 "product_name": "Logical Volume", 00:21:10.170 "block_size": 4096, 00:21:10.170 "num_blocks": 26476544, 00:21:10.170 "uuid": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:10.170 "assigned_rate_limits": { 00:21:10.170 "rw_ios_per_sec": 0, 00:21:10.170 "rw_mbytes_per_sec": 0, 00:21:10.170 "r_mbytes_per_sec": 0, 00:21:10.170 "w_mbytes_per_sec": 0 00:21:10.170 }, 00:21:10.170 "claimed": false, 00:21:10.170 "zoned": false, 00:21:10.170 "supported_io_types": { 00:21:10.170 "read": true, 00:21:10.170 "write": true, 00:21:10.170 "unmap": true, 00:21:10.170 "flush": false, 00:21:10.170 "reset": true, 00:21:10.170 "nvme_admin": false, 00:21:10.170 "nvme_io": false, 00:21:10.170 "nvme_io_md": false, 00:21:10.170 "write_zeroes": true, 00:21:10.170 "zcopy": false, 00:21:10.170 "get_zone_info": false, 00:21:10.170 "zone_management": false, 00:21:10.170 "zone_append": false, 00:21:10.170 "compare": false, 00:21:10.170 "compare_and_write": false, 00:21:10.170 "abort": false, 00:21:10.170 "seek_hole": true, 00:21:10.170 "seek_data": true, 00:21:10.170 "copy": false, 00:21:10.170 "nvme_iov_md": false 00:21:10.170 }, 00:21:10.170 "driver_specific": { 00:21:10.170 "lvol": { 00:21:10.170 "lvol_store_uuid": "342d81aa-c9f5-410e-8bbc-840dc7fcdd1c", 00:21:10.170 "base_bdev": "nvme0n1", 00:21:10.170 "thin_provision": true, 00:21:10.170 "num_allocated_clusters": 0, 00:21:10.170 "snapshot": false, 00:21:10.170 "clone": false, 00:21:10.170 "esnap_clone": false 00:21:10.170 } 00:21:10.170 } 00:21:10.170 } 00:21:10.170 ]' 00:21:10.170 01:46:53 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:10.171 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:10.171 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:10.171 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:10.171 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:10.171 01:46:53 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:10.171 01:46:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:10.171 01:46:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:10.171 01:46:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:10.432 01:46:54 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:10.432 01:46:54 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:10.433 01:46:54 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.433 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.433 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:10.433 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:10.433 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:10.433 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.694 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:10.694 { 00:21:10.695 "name": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:10.695 "aliases": [ 00:21:10.695 "lvs/nvme0n1p0" 00:21:10.695 ], 00:21:10.695 "product_name": "Logical Volume", 00:21:10.695 "block_size": 4096, 00:21:10.695 "num_blocks": 26476544, 00:21:10.695 "uuid": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:10.695 "assigned_rate_limits": { 00:21:10.695 "rw_ios_per_sec": 0, 00:21:10.695 "rw_mbytes_per_sec": 0, 00:21:10.695 "r_mbytes_per_sec": 0, 00:21:10.695 "w_mbytes_per_sec": 0 00:21:10.695 }, 00:21:10.695 "claimed": false, 00:21:10.695 "zoned": false, 00:21:10.695 "supported_io_types": { 00:21:10.695 "read": true, 00:21:10.695 "write": true, 00:21:10.695 "unmap": true, 00:21:10.695 "flush": false, 00:21:10.695 "reset": true, 00:21:10.695 "nvme_admin": false, 00:21:10.695 "nvme_io": false, 00:21:10.695 "nvme_io_md": false, 00:21:10.695 "write_zeroes": true, 00:21:10.695 "zcopy": false, 00:21:10.695 "get_zone_info": false, 00:21:10.695 "zone_management": false, 00:21:10.695 "zone_append": false, 00:21:10.695 "compare": false, 00:21:10.695 "compare_and_write": false, 00:21:10.695 "abort": false, 00:21:10.695 "seek_hole": true, 00:21:10.695 "seek_data": true, 00:21:10.695 "copy": false, 00:21:10.695 "nvme_iov_md": false 00:21:10.695 }, 00:21:10.695 "driver_specific": { 00:21:10.695 "lvol": { 00:21:10.695 "lvol_store_uuid": "342d81aa-c9f5-410e-8bbc-840dc7fcdd1c", 00:21:10.695 "base_bdev": "nvme0n1", 00:21:10.695 "thin_provision": true, 00:21:10.695 "num_allocated_clusters": 0, 00:21:10.695 "snapshot": false, 00:21:10.695 "clone": false, 00:21:10.695 "esnap_clone": false 00:21:10.695 } 00:21:10.695 } 00:21:10.695 } 00:21:10.695 ]' 00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:10.695 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:10.695 01:46:54 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:10.695 01:46:54 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:10.956 01:46:54 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:10.956 01:46:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.956 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:10.956 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:10.956 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:10.956 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:10.957 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 00:21:11.219 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:11.219 { 00:21:11.219 "name": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:11.219 "aliases": [ 00:21:11.219 "lvs/nvme0n1p0" 00:21:11.219 ], 00:21:11.219 "product_name": "Logical Volume", 00:21:11.219 "block_size": 4096, 00:21:11.219 "num_blocks": 26476544, 00:21:11.219 "uuid": "4ac4b6c9-6b87-4fcc-bc05-db5a5a473503", 00:21:11.219 "assigned_rate_limits": { 00:21:11.219 "rw_ios_per_sec": 0, 00:21:11.219 "rw_mbytes_per_sec": 0, 00:21:11.219 "r_mbytes_per_sec": 0, 00:21:11.219 "w_mbytes_per_sec": 0 00:21:11.219 }, 00:21:11.219 "claimed": false, 00:21:11.219 "zoned": false, 00:21:11.219 "supported_io_types": { 00:21:11.219 "read": true, 00:21:11.219 "write": true, 00:21:11.219 "unmap": true, 00:21:11.219 "flush": false, 00:21:11.219 "reset": true, 00:21:11.219 "nvme_admin": false, 00:21:11.219 "nvme_io": false, 00:21:11.219 "nvme_io_md": false, 00:21:11.219 "write_zeroes": true, 00:21:11.219 "zcopy": false, 00:21:11.219 "get_zone_info": false, 00:21:11.219 "zone_management": false, 00:21:11.219 "zone_append": false, 00:21:11.219 "compare": false, 00:21:11.219 "compare_and_write": false, 00:21:11.219 "abort": false, 00:21:11.219 "seek_hole": true, 00:21:11.219 "seek_data": true, 00:21:11.219 "copy": false, 00:21:11.219 "nvme_iov_md": false 00:21:11.219 }, 00:21:11.219 "driver_specific": { 00:21:11.219 "lvol": { 00:21:11.219 "lvol_store_uuid": "342d81aa-c9f5-410e-8bbc-840dc7fcdd1c", 00:21:11.219 "base_bdev": "nvme0n1", 00:21:11.219 "thin_provision": true, 00:21:11.219 "num_allocated_clusters": 0, 00:21:11.219 "snapshot": false, 00:21:11.219 "clone": false, 00:21:11.219 "esnap_clone": false 00:21:11.219 } 00:21:11.219 } 00:21:11.219 } 00:21:11.219 ]' 00:21:11.219 01:46:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:11.219 01:46:55 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:11.219 01:46:55 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:11.219 01:46:55 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:11.219 01:46:55 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:11.219 01:46:55 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 --l2p_dram_limit 10' 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:11.219 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:11.219 01:46:55 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ac4b6c9-6b87-4fcc-bc05-db5a5a473503 --l2p_dram_limit 10 -c nvc0n1p0 00:21:11.482 [2024-11-21 01:46:55.257813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.258077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.482 [2024-11-21 01:46:55.258110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:11.482 [2024-11-21 01:46:55.258121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.258217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.258230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.482 [2024-11-21 01:46:55.258244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:11.482 [2024-11-21 01:46:55.258253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.258286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.482 [2024-11-21 01:46:55.259021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.482 [2024-11-21 01:46:55.259050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.259061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.482 [2024-11-21 01:46:55.259073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:21:11.482 [2024-11-21 01:46:55.259082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.259121] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f555680a-895d-4d20-a79f-de599ad6b77b 00:21:11.482 [2024-11-21 01:46:55.261509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.261747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:11.482 [2024-11-21 01:46:55.261771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:11.482 [2024-11-21 01:46:55.261784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.274726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 
01:46:55.274775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.482 [2024-11-21 01:46:55.274791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.821 ms 00:21:11.482 [2024-11-21 01:46:55.274802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.274910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.274923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.482 [2024-11-21 01:46:55.274932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:11.482 [2024-11-21 01:46:55.274948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.275010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.275024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.482 [2024-11-21 01:46:55.275033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:11.482 [2024-11-21 01:46:55.275047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.275072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.482 [2024-11-21 01:46:55.280135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.280183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.482 [2024-11-21 01:46:55.280199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:21:11.482 [2024-11-21 01:46:55.280207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.280251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.280261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.482 [2024-11-21 01:46:55.280272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:11.482 [2024-11-21 01:46:55.280280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.280334] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:11.482 [2024-11-21 01:46:55.280491] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.482 [2024-11-21 01:46:55.280513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.482 [2024-11-21 01:46:55.280526] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.482 [2024-11-21 01:46:55.280540] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.482 [2024-11-21 01:46:55.280552] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.482 [2024-11-21 01:46:55.280564] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.482 [2024-11-21 01:46:55.280573] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.482 [2024-11-21 01:46:55.280586] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.482 [2024-11-21 01:46:55.280593] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.482 [2024-11-21 01:46:55.280604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.280640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.482 [2024-11-21 01:46:55.280652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:21:11.482 [2024-11-21 01:46:55.280670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.280781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.482 [2024-11-21 01:46:55.280792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.482 [2024-11-21 01:46:55.280805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:11.482 [2024-11-21 01:46:55.280813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.482 [2024-11-21 01:46:55.280925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.482 [2024-11-21 01:46:55.280938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.482 [2024-11-21 01:46:55.280950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.482 [2024-11-21 01:46:55.280959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.482 [2024-11-21 01:46:55.280971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.482 [2024-11-21 01:46:55.280979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.482 [2024-11-21 01:46:55.280989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.482 [2024-11-21 01:46:55.280996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.482 [2024-11-21 01:46:55.281005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.482 [2024-11-21 01:46:55.281014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.482 [2024-11-21 01:46:55.281025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.482 [2024-11-21 01:46:55.281034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.482 [2024-11-21 01:46:55.281044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.483 [2024-11-21 01:46:55.281052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.483 [2024-11-21 01:46:55.281062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.483 [2024-11-21 01:46:55.281070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.483 [2024-11-21 01:46:55.281092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.483 [2024-11-21 01:46:55.281127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.483 
[2024-11-21 01:46:55.281150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.483 [2024-11-21 01:46:55.281174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.483 [2024-11-21 01:46:55.281199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.483 [2024-11-21 01:46:55.281226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.483 [2024-11-21 01:46:55.281242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.483 [2024-11-21 01:46:55.281249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.483 [2024-11-21 01:46:55.281261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.483 [2024-11-21 01:46:55.281268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.483 [2024-11-21 01:46:55.281277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.483 [2024-11-21 01:46:55.281284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.483 [2024-11-21 01:46:55.281318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.483 [2024-11-21 01:46:55.281327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.483 [2024-11-21 01:46:55.281345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.483 [2024-11-21 01:46:55.281354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.483 [2024-11-21 01:46:55.281375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.483 [2024-11-21 01:46:55.281387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.483 [2024-11-21 01:46:55.281395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.483 [2024-11-21 01:46:55.281405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.483 [2024-11-21 01:46:55.281414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.483 [2024-11-21 01:46:55.281424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.483 [2024-11-21 01:46:55.281436] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.483 [2024-11-21 
01:46:55.281449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.483 [2024-11-21 01:46:55.281472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.483 [2024-11-21 01:46:55.281480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.483 [2024-11-21 01:46:55.281490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.483 [2024-11-21 01:46:55.281498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.483 [2024-11-21 01:46:55.281508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.483 [2024-11-21 01:46:55.281516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.483 [2024-11-21 01:46:55.281525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.483 [2024-11-21 01:46:55.281532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.483 [2024-11-21 01:46:55.281544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.483 [2024-11-21 01:46:55.281594] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.483 [2024-11-21 01:46:55.281606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.483 [2024-11-21 01:46:55.281646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.483 [2024-11-21 01:46:55.281654] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.483 [2024-11-21 01:46:55.281665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.483 [2024-11-21 01:46:55.281674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.483 [2024-11-21 01:46:55.281685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.483 [2024-11-21 01:46:55.281694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:21:11.483 [2024-11-21 01:46:55.281704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.483 [2024-11-21 01:46:55.281747] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:11.483 [2024-11-21 01:46:55.281772] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:16.777 [2024-11-21 01:46:59.702983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.703083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:16.777 [2024-11-21 01:46:59.703103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4421.217 ms 00:21:16.777 [2024-11-21 01:46:59.703116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.735977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.736048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:16.777 [2024-11-21 01:46:59.736064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.610 ms 00:21:16.777 [2024-11-21 01:46:59.736077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.736226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.736241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:16.777 [2024-11-21 01:46:59.736251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:16.777 [2024-11-21 01:46:59.736265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.772315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.772371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:16.777 [2024-11-21 01:46:59.772384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.008 ms 00:21:16.777 [2024-11-21 01:46:59.772395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.772431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.772447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:16.777 [2024-11-21 01:46:59.772456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:16.777 [2024-11-21 01:46:59.772466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.773118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.773160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:16.777 [2024-11-21 01:46:59.773180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:21:16.777 [2024-11-21 01:46:59.773191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 
[2024-11-21 01:46:59.773323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.773336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:16.777 [2024-11-21 01:46:59.773348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:16.777 [2024-11-21 01:46:59.773361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.791055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.791284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:16.777 [2024-11-21 01:46:59.791306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.673 ms 00:21:16.777 [2024-11-21 01:46:59.791317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.804697] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:16.777 [2024-11-21 01:46:59.808699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.808745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:16.777 [2024-11-21 01:46:59.808760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.262 ms 00:21:16.777 [2024-11-21 01:46:59.808769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.927735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.927801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:16.777 [2024-11-21 01:46:59.927823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.928 ms 00:21:16.777 [2024-11-21 01:46:59.927833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.928055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.928072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:16.777 [2024-11-21 01:46:59.928088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:16.777 [2024-11-21 01:46:59.928097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.954989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.955042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:16.777 [2024-11-21 01:46:59.955059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.830 ms 00:21:16.777 [2024-11-21 01:46:59.955068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.980862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.980908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:16.777 [2024-11-21 01:46:59.980924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.732 ms 00:21:16.777 [2024-11-21 01:46:59.980931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:46:59.981598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:46:59.981697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:16.777 
[2024-11-21 01:46:59.981711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:21:16.777 [2024-11-21 01:46:59.981720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.777 [2024-11-21 01:47:00.068024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.777 [2024-11-21 01:47:00.068088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:16.778 [2024-11-21 01:47:00.068112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.231 ms 00:21:16.778 [2024-11-21 01:47:00.068123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.096301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.096363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:16.778 [2024-11-21 01:47:00.096381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.062 ms 00:21:16.778 [2024-11-21 01:47:00.096390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.123585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.123665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:16.778 [2024-11-21 01:47:00.123682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.131 ms 00:21:16.778 [2024-11-21 01:47:00.123690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.150803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.150856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:16.778 [2024-11-21 01:47:00.150872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.050 ms 00:21:16.778 [2024-11-21 01:47:00.150881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.150943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.150953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:16.778 [2024-11-21 01:47:00.150970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:16.778 [2024-11-21 01:47:00.150978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.151090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.151101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:16.778 [2024-11-21 01:47:00.151116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:16.778 [2024-11-21 01:47:00.151124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.152363] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4894.033 ms, result 0 00:21:16.778 { 00:21:16.778 "name": "ftl0", 00:21:16.778 "uuid": "f555680a-895d-4d20-a79f-de599ad6b77b" 00:21:16.778 } 00:21:16.778 01:47:00 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:16.778 01:47:00 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:16.778 01:47:00 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:16.778 01:47:00 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:16.778 [2024-11-21 01:47:00.595671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.595740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:16.778 [2024-11-21 01:47:00.595757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:16.778 [2024-11-21 01:47:00.595777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.595803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:16.778 [2024-11-21 01:47:00.598921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.598969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:16.778 [2024-11-21 01:47:00.598990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:21:16.778 [2024-11-21 01:47:00.598998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.599297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.599309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:16.778 [2024-11-21 01:47:00.599324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:16.778 [2024-11-21 01:47:00.599332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.602597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.602804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:16.778 [2024-11-21 01:47:00.602828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:21:16.778 [2024-11-21 01:47:00.602837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.609155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.609200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:16.778 [2024-11-21 01:47:00.609219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.281 ms 00:21:16.778 [2024-11-21 01:47:00.609227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.636298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.636351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:16.778 [2024-11-21 01:47:00.636368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:21:16.778 [2024-11-21 01:47:00.636376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.654210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.654260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:16.778 [2024-11-21 01:47:00.654278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.768 ms 00:21:16.778 [2024-11-21 01:47:00.654286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.654474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.654486] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:16.778 [2024-11-21 01:47:00.654499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:16.778 [2024-11-21 01:47:00.654507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.680326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.680520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:16.778 [2024-11-21 01:47:00.680549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.793 ms 00:21:16.778 [2024-11-21 01:47:00.680557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:16.778 [2024-11-21 01:47:00.706188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:16.778 [2024-11-21 01:47:00.706239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:16.778 [2024-11-21 01:47:00.706254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.579 ms 00:21:16.778 [2024-11-21 01:47:00.706262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.041 [2024-11-21 01:47:00.730985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.041 [2024-11-21 01:47:00.731036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:17.041 [2024-11-21 01:47:00.731052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.665 ms 00:21:17.041 [2024-11-21 01:47:00.731059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.041 [2024-11-21 01:47:00.756309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.041 [2024-11-21 01:47:00.756361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:17.041 [2024-11-21 01:47:00.756376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.143 ms 00:21:17.041 [2024-11-21 01:47:00.756383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.041 [2024-11-21 01:47:00.756436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:17.041 [2024-11-21 01:47:00.756452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756542] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:17.041 [2024-11-21 01:47:00.756759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 
[2024-11-21 01:47:00.756796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.756998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:17.042 [2024-11-21 01:47:00.757032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:17.042 [2024-11-21 01:47:00.757431] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:17.042 [2024-11-21 01:47:00.757444] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f555680a-895d-4d20-a79f-de599ad6b77b 00:21:17.042 [2024-11-21 01:47:00.757452] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:17.042 [2024-11-21 01:47:00.757465] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:17.042 [2024-11-21 01:47:00.757472] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:17.042 [2024-11-21 01:47:00.757487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:17.042 [2024-11-21 01:47:00.757494] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:17.042 [2024-11-21 01:47:00.757504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:17.042 [2024-11-21 01:47:00.757511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:17.042 [2024-11-21 01:47:00.757520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:17.042 [2024-11-21 01:47:00.757527] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:17.042 [2024-11-21 01:47:00.757537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.042 [2024-11-21 01:47:00.757545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:17.042 [2024-11-21 01:47:00.757557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:21:17.042 [2024-11-21 01:47:00.757564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.042 [2024-11-21 01:47:00.771449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.042 [2024-11-21 01:47:00.771493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:17.042 [2024-11-21 01:47:00.771507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.832 ms 00:21:17.042 [2024-11-21 01:47:00.771515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.042 [2024-11-21 01:47:00.771958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.042 [2024-11-21 01:47:00.771977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:17.042 [2024-11-21 01:47:00.771989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:21:17.042 [2024-11-21 01:47:00.771999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.042 [2024-11-21 01:47:00.819023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.819075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:17.043 [2024-11-21 01:47:00.819090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.819099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.819173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.819182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:17.043 [2024-11-21 01:47:00.819193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.819203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.819310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.819321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:17.043 [2024-11-21 01:47:00.819333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.819340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.819363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.819372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:17.043 [2024-11-21 01:47:00.819381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.819389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.905319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.905380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:17.043 [2024-11-21 01:47:00.905397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:17.043 [2024-11-21 01:47:00.905407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:17.043 [2024-11-21 01:47:00.975271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.043 [2024-11-21 01:47:00.975402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.043 [2024-11-21 01:47:00.975504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.043 [2024-11-21 01:47:00.975679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:17.043 [2024-11-21 01:47:00.975750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.043 [2024-11-21 01:47:00.975826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.975891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:17.043 [2024-11-21 01:47:00.975901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.043 [2024-11-21 01:47:00.975912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:17.043 [2024-11-21 01:47:00.975921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.043 [2024-11-21 01:47:00.976075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.359 ms, result 0 00:21:17.043 true 00:21:17.304 01:47:01 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77200 
00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77200 ']' 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77200 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77200 00:21:17.304 killing process with pid 77200 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77200' 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77200 00:21:17.304 01:47:01 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77200 00:21:21.519 01:47:05 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:25.746 262144+0 records in 00:21:25.746 262144+0 records out 00:21:25.746 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.73861 s, 287 MB/s 00:21:25.746 01:47:08 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:27.133 01:47:10 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:27.133 [2024-11-21 01:47:10.949460] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:21:27.133 [2024-11-21 01:47:10.949550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77436 ] 00:21:27.393 [2024-11-21 01:47:11.103430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.393 [2024-11-21 01:47:11.219706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.655 [2024-11-21 01:47:11.508716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:27.655 [2024-11-21 01:47:11.509017] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:27.918 [2024-11-21 01:47:11.669843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.669903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:27.918 [2024-11-21 01:47:11.669926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:27.918 [2024-11-21 01:47:11.669936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.669990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.670002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.918 [2024-11-21 01:47:11.670013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:27.918 [2024-11-21 01:47:11.670021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.670042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:27.918 [2024-11-21 01:47:11.670779] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:27.918 [2024-11-21 01:47:11.670799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.670807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.918 [2024-11-21 01:47:11.670818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:21:27.918 [2024-11-21 01:47:11.670826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.672528] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:27.918 [2024-11-21 01:47:11.686656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.686703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:27.918 [2024-11-21 01:47:11.686717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.129 ms 00:21:27.918 [2024-11-21 01:47:11.686725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.686804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.686815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:27.918 [2024-11-21 01:47:11.686824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:27.918 [2024-11-21 01:47:11.686832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.694665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.694703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.918 [2024-11-21 01:47:11.694722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.757 ms 00:21:27.918 [2024-11-21 01:47:11.694730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.694810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.694820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.918 [2024-11-21 01:47:11.694829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:27.918 [2024-11-21 01:47:11.694837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.694879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.694890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:27.918 [2024-11-21 01:47:11.694898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:27.918 [2024-11-21 01:47:11.694905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.694929] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:27.918 [2024-11-21 01:47:11.698861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.698898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.918 [2024-11-21 01:47:11.698909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.938 ms 00:21:27.918 [2024-11-21 01:47:11.698920] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.698955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.918 [2024-11-21 01:47:11.698963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:27.918 [2024-11-21 01:47:11.698973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:27.918 [2024-11-21 01:47:11.698980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.918 [2024-11-21 01:47:11.699031] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:27.918 [2024-11-21 01:47:11.699055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:27.918 [2024-11-21 01:47:11.699091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:27.918 [2024-11-21 01:47:11.699111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:27.918 [2024-11-21 01:47:11.699216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:27.918 [2024-11-21 01:47:11.699227] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:27.919 [2024-11-21 01:47:11.699238] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:27.919 [2024-11-21 01:47:11.699249] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699267] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:27.919 [2024-11-21 01:47:11.699274] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:27.919 [2024-11-21 01:47:11.699282] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:27.919 [2024-11-21 01:47:11.699290] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:27.919 [2024-11-21 01:47:11.699301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.919 [2024-11-21 01:47:11.699309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:27.919 [2024-11-21 01:47:11.699317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:21:27.919 [2024-11-21 01:47:11.699324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.919 [2024-11-21 01:47:11.699406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.919 [2024-11-21 01:47:11.699415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:27.919 [2024-11-21 01:47:11.699423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:27.919 [2024-11-21 01:47:11.699431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.919 [2024-11-21 01:47:11.699535] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:27.919 [2024-11-21 01:47:11.699548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:27.919 [2024-11-21 01:47:11.699557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:27.919 [2024-11-21 01:47:11.699565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:27.919 [2024-11-21 01:47:11.699580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:27.919 [2024-11-21 01:47:11.699603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.919 [2024-11-21 01:47:11.699643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:27.919 [2024-11-21 01:47:11.699650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:27.919 [2024-11-21 01:47:11.699659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:27.919 [2024-11-21 01:47:11.699667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:27.919 [2024-11-21 01:47:11.699675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:27.919 [2024-11-21 01:47:11.699688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:27.919 [2024-11-21 01:47:11.699703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:27.919 [2024-11-21 01:47:11.699724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:27.919 [2024-11-21 01:47:11.699745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:27.919 [2024-11-21 01:47:11.699765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:27.919 [2024-11-21 01:47:11.699784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:27.919 [2024-11-21 01:47:11.699807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.919 [2024-11-21 01:47:11.699820] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:27.919 [2024-11-21 01:47:11.699827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:27.919 [2024-11-21 01:47:11.699833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:27.919 [2024-11-21 01:47:11.699840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:27.919 [2024-11-21 01:47:11.699847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:27.919 [2024-11-21 01:47:11.699853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:27.919 [2024-11-21 01:47:11.699866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:27.919 [2024-11-21 01:47:11.699873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699879] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:27.919 [2024-11-21 01:47:11.699888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:27.919 [2024-11-21 01:47:11.699896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:27.919 [2024-11-21 01:47:11.699912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:27.919 [2024-11-21 01:47:11.699919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:27.919 [2024-11-21 01:47:11.699926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:27.919 [2024-11-21 01:47:11.699933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:27.919 [2024-11-21 01:47:11.699939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:27.919 [2024-11-21 01:47:11.699946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:27.919 [2024-11-21 01:47:11.699955] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:27.919 [2024-11-21 01:47:11.699964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.699973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:27.919 [2024-11-21 01:47:11.699981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:27.919 [2024-11-21 01:47:11.699988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:27.919 [2024-11-21 01:47:11.699994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:27.919 [2024-11-21 01:47:11.700002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:27.919 [2024-11-21 01:47:11.700009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:27.919 [2024-11-21 01:47:11.700016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:27.919 [2024-11-21 01:47:11.700022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:27.919 [2024-11-21 01:47:11.700029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:27.919 [2024-11-21 01:47:11.700036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:27.919 [2024-11-21 01:47:11.700072] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:27.919 [2024-11-21 01:47:11.700083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:27.919 [2024-11-21 01:47:11.700098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:27.919 [2024-11-21 01:47:11.700105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:27.919 [2024-11-21 01:47:11.700113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:27.919 [2024-11-21 01:47:11.700120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.919 [2024-11-21 01:47:11.700128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:27.919 [2024-11-21 01:47:11.700136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:21:27.919 [2024-11-21 01:47:11.700143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.919 [2024-11-21 01:47:11.731537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.919 [2024-11-21 01:47:11.731588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.919 [2024-11-21 01:47:11.731600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.349 ms 00:21:27.919 [2024-11-21 01:47:11.731609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.919 [2024-11-21 01:47:11.731727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.731736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:27.920 [2024-11-21 01:47:11.731746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:21:27.920 [2024-11-21 01:47:11.731753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.777558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.777632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.920 [2024-11-21 01:47:11.777645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.744 ms 00:21:27.920 [2024-11-21 01:47:11.777655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.777704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.777714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.920 [2024-11-21 01:47:11.777725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:27.920 [2024-11-21 01:47:11.777737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.778337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.778367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.920 [2024-11-21 01:47:11.778378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:21:27.920 [2024-11-21 01:47:11.778386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.778540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.778551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.920 [2024-11-21 01:47:11.778560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:21:27.920 [2024-11-21 01:47:11.778573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.794190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.794235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.920 [2024-11-21 01:47:11.794250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.596 ms 00:21:27.920 [2024-11-21 01:47:11.794260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.808686] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:27.920 [2024-11-21 01:47:11.808890] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:27.920 [2024-11-21 01:47:11.808910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.808919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:27.920 [2024-11-21 01:47:11.808929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:21:27.920 [2024-11-21 01:47:11.808937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.834565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.834628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:27.920 [2024-11-21 01:47:11.834647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.491 ms 00:21:27.920 [2024-11-21 01:47:11.834655] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.847709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.847904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:27.920 [2024-11-21 01:47:11.847924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.001 ms 00:21:27.920 [2024-11-21 01:47:11.847933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.860919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.860974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:27.920 [2024-11-21 01:47:11.860988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.701 ms 00:21:27.920 [2024-11-21 01:47:11.860996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.920 [2024-11-21 01:47:11.861686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.920 [2024-11-21 01:47:11.861713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:27.920 [2024-11-21 01:47:11.861725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:21:27.920 [2024-11-21 01:47:11.861733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.181 [2024-11-21 01:47:11.926277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.181 [2024-11-21 01:47:11.926340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:28.181 [2024-11-21 01:47:11.926355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.520 ms 00:21:28.181 [2024-11-21 01:47:11.926372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.181 [2024-11-21 01:47:11.937772] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:28.181 [2024-11-21 01:47:11.940805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.181 [2024-11-21 01:47:11.940846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:28.181 [2024-11-21 01:47:11.940859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.376 ms 00:21:28.181 [2024-11-21 01:47:11.940868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.181 [2024-11-21 01:47:11.940954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.181 [2024-11-21 01:47:11.940965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:28.181 [2024-11-21 01:47:11.940975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:28.181 [2024-11-21 01:47:11.940983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.181 [2024-11-21 01:47:11.941058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.181 [2024-11-21 01:47:11.941069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:28.181 [2024-11-21 01:47:11.941078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:28.181 [2024-11-21 01:47:11.941086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.181 [2024-11-21 01:47:11.941107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.181 [2024-11-21 01:47:11.941116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:28.182 [2024-11-21 01:47:11.941124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:28.182 [2024-11-21 01:47:11.941133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.182 [2024-11-21 01:47:11.941168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:28.182 [2024-11-21 01:47:11.941179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.182 [2024-11-21 01:47:11.941190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:28.182 [2024-11-21 01:47:11.941199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:28.182 [2024-11-21 01:47:11.941206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.182 [2024-11-21 01:47:11.967468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.182 [2024-11-21 01:47:11.967650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:28.182 [2024-11-21 01:47:11.967720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.243 ms 00:21:28.182 [2024-11-21 01:47:11.967746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.182 [2024-11-21 01:47:11.967845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.182 [2024-11-21 01:47:11.967871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:28.182 [2024-11-21 01:47:11.967893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:28.182 [2024-11-21 01:47:11.967913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.182 [2024-11-21 01:47:11.969737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.367 ms, result 0 00:21:29.125  [2024-11-21T01:47:14.025Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-21T01:47:15.412Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-21T01:47:15.986Z] Copying: 86/1024 [MB] (51 MBps) [2024-11-21T01:47:17.373Z] Copying: 102/1024 [MB] (16 MBps) [2024-11-21T01:47:18.018Z] Copying: 120/1024 [MB] (17 MBps) [2024-11-21T01:47:19.396Z] Copying: 138/1024 [MB] (18 MBps) [2024-11-21T01:47:20.335Z] Copying: 155/1024 [MB] (16 MBps) [2024-11-21T01:47:21.271Z] Copying: 171/1024 [MB] (16 MBps) [2024-11-21T01:47:22.207Z] Copying: 188/1024 [MB] (16 MBps) [2024-11-21T01:47:23.142Z] Copying: 199/1024 [MB] (11 MBps) [2024-11-21T01:47:24.077Z] Copying: 211/1024 [MB] (11 MBps) [2024-11-21T01:47:25.011Z] Copying: 225/1024 [MB] (13 MBps) [2024-11-21T01:47:26.395Z] Copying: 239/1024 [MB] (14 MBps) [2024-11-21T01:47:27.337Z] Copying: 259/1024 [MB] (19 MBps) [2024-11-21T01:47:28.281Z] Copying: 276/1024 [MB] (16 MBps) [2024-11-21T01:47:29.225Z] Copying: 295/1024 [MB] (19 MBps) [2024-11-21T01:47:30.157Z] Copying: 315/1024 [MB] (19 MBps) [2024-11-21T01:47:31.097Z] Copying: 327/1024 [MB] (12 MBps) [2024-11-21T01:47:32.039Z] Copying: 338/1024 [MB] (11 MBps) [2024-11-21T01:47:33.422Z] Copying: 350/1024 [MB] (11 MBps) [2024-11-21T01:47:33.988Z] Copying: 360/1024 [MB] (10 MBps) [2024-11-21T01:47:35.368Z] Copying: 372/1024 [MB] (12 MBps) [2024-11-21T01:47:36.302Z] Copying: 383/1024 [MB] (10 MBps) [2024-11-21T01:47:37.237Z] Copying: 395/1024 [MB] (12 MBps) [2024-11-21T01:47:38.172Z] Copying: 407/1024 [MB] (12 MBps) [2024-11-21T01:47:39.108Z] Copying: 419/1024 [MB] (12 MBps) [2024-11-21T01:47:40.058Z] Copying: 432/1024 [MB] (12 
MBps) [2024-11-21T01:47:40.995Z] Copying: 445/1024 [MB] (12 MBps) [2024-11-21T01:47:42.378Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-21T01:47:43.319Z] Copying: 467/1024 [MB] (11 MBps) [2024-11-21T01:47:44.261Z] Copying: 482/1024 [MB] (15 MBps) [2024-11-21T01:47:45.203Z] Copying: 523/1024 [MB] (40 MBps) [2024-11-21T01:47:46.146Z] Copying: 534/1024 [MB] (11 MBps) [2024-11-21T01:47:47.090Z] Copying: 559/1024 [MB] (24 MBps) [2024-11-21T01:47:48.034Z] Copying: 588/1024 [MB] (29 MBps) [2024-11-21T01:47:49.053Z] Copying: 605/1024 [MB] (17 MBps) [2024-11-21T01:47:50.000Z] Copying: 625/1024 [MB] (19 MBps) [2024-11-21T01:47:51.388Z] Copying: 647/1024 [MB] (22 MBps) [2024-11-21T01:47:52.330Z] Copying: 664/1024 [MB] (17 MBps) [2024-11-21T01:47:53.273Z] Copying: 684/1024 [MB] (19 MBps) [2024-11-21T01:47:54.216Z] Copying: 704/1024 [MB] (19 MBps) [2024-11-21T01:47:55.157Z] Copying: 727/1024 [MB] (23 MBps) [2024-11-21T01:47:56.100Z] Copying: 750/1024 [MB] (22 MBps) [2024-11-21T01:47:57.042Z] Copying: 767/1024 [MB] (17 MBps) [2024-11-21T01:47:58.426Z] Copying: 787/1024 [MB] (19 MBps) [2024-11-21T01:47:58.996Z] Copying: 804/1024 [MB] (16 MBps) [2024-11-21T01:48:00.383Z] Copying: 815/1024 [MB] (11 MBps) [2024-11-21T01:48:01.326Z] Copying: 826/1024 [MB] (11 MBps) [2024-11-21T01:48:02.268Z] Copying: 837/1024 [MB] (10 MBps) [2024-11-21T01:48:03.210Z] Copying: 868/1024 [MB] (30 MBps) [2024-11-21T01:48:04.151Z] Copying: 888/1024 [MB] (19 MBps) [2024-11-21T01:48:05.092Z] Copying: 903/1024 [MB] (15 MBps) [2024-11-21T01:48:06.034Z] Copying: 919/1024 [MB] (16 MBps) [2024-11-21T01:48:07.426Z] Copying: 940/1024 [MB] (20 MBps) [2024-11-21T01:48:08.000Z] Copying: 960/1024 [MB] (19 MBps) [2024-11-21T01:48:09.386Z] Copying: 979/1024 [MB] (18 MBps) [2024-11-21T01:48:10.331Z] Copying: 994/1024 [MB] (15 MBps) [2024-11-21T01:48:11.277Z] Copying: 1005/1024 [MB] (11 MBps) [2024-11-21T01:48:11.277Z] Copying: 1022/1024 [MB] (17 MBps) [2024-11-21T01:48:11.277Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-21 01:48:11.070384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.070443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.320 [2024-11-21 01:48:11.070459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:27.320 [2024-11-21 01:48:11.070468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.070491] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:27.320 [2024-11-21 01:48:11.073701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.073740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.320 [2024-11-21 01:48:11.073752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.193 ms 00:22:27.320 [2024-11-21 01:48:11.073762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.076724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.076768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.320 [2024-11-21 01:48:11.076779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.925 ms 00:22:27.320 [2024-11-21 01:48:11.076788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.095433] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.095483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.320 [2024-11-21 01:48:11.095495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.629 ms 00:22:27.320 [2024-11-21 01:48:11.095503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.101776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.101832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.320 [2024-11-21 01:48:11.101842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.230 ms 00:22:27.320 [2024-11-21 01:48:11.101850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.128353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.128398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.320 [2024-11-21 01:48:11.128410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.447 ms 00:22:27.320 [2024-11-21 01:48:11.128418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.144472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.144516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.320 [2024-11-21 01:48:11.144529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.009 ms 00:22:27.320 [2024-11-21 01:48:11.144537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.144704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.144717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.320 [2024-11-21 01:48:11.144734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:27.320 [2024-11-21 01:48:11.144742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.170397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.170451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.320 [2024-11-21 01:48:11.170462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.639 ms 00:22:27.320 [2024-11-21 01:48:11.170469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.196007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.196051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.320 [2024-11-21 01:48:11.196075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.493 ms 00:22:27.320 [2024-11-21 01:48:11.196082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.221129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.221172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.320 [2024-11-21 01:48:11.221183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.002 ms 00:22:27.320 [2024-11-21 01:48:11.221189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:27.320 [2024-11-21 01:48:11.246034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.320 [2024-11-21 01:48:11.246078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.320 [2024-11-21 01:48:11.246088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.773 ms 00:22:27.320 [2024-11-21 01:48:11.246096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.320 [2024-11-21 01:48:11.246138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:27.320 [2024-11-21 01:48:11.246153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:27.320 [2024-11-21 01:48:11.246284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 
wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246709] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246899] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:27.321 [2024-11-21 01:48:11.246947] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:27.321 [2024-11-21 01:48:11.246970] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f555680a-895d-4d20-a79f-de599ad6b77b 00:22:27.321 [2024-11-21 01:48:11.246978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:27.321 [2024-11-21 01:48:11.246988] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:27.321 [2024-11-21 01:48:11.246995] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:27.321 [2024-11-21 01:48:11.247003] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:27.321 [2024-11-21 01:48:11.247011] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.321 [2024-11-21 01:48:11.247019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.321 [2024-11-21 01:48:11.247026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.322 [2024-11-21 01:48:11.247040] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.322 [2024-11-21 01:48:11.247046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:27.322 [2024-11-21 01:48:11.247053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.322 [2024-11-21 01:48:11.247061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.322 [2024-11-21 01:48:11.247072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:22:27.322 [2024-11-21 01:48:11.247080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.322 [2024-11-21 01:48:11.260483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.322 [2024-11-21 01:48:11.260525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.322 [2024-11-21 01:48:11.260536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.383 ms 00:22:27.322 [2024-11-21 01:48:11.260545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.322 [2024-11-21 01:48:11.260990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.322 [2024-11-21 01:48:11.261028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:27.322 [2024-11-21 01:48:11.261038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:22:27.322 [2024-11-21 01:48:11.261046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.297631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.297679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.584 [2024-11-21 01:48:11.297691] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.297700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.297765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.297775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.584 [2024-11-21 01:48:11.297785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.297794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.297883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.297895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.584 [2024-11-21 01:48:11.297905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.297914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.297931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.297939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.584 [2024-11-21 01:48:11.297947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.297954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.381934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.381990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.584 [2024-11-21 01:48:11.382004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.382012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.584 [2024-11-21 01:48:11.450473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.584 [2024-11-21 01:48:11.450569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.584 [2024-11-21 01:48:11.450685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:22:27.584 [2024-11-21 01:48:11.450816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:27.584 [2024-11-21 01:48:11.450877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.450925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.450935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.584 [2024-11-21 01:48:11.450948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.450957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.451003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.584 [2024-11-21 01:48:11.451014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.584 [2024-11-21 01:48:11.451023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.584 [2024-11-21 01:48:11.451031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.584 [2024-11-21 01:48:11.451164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.740 ms, result 0 00:22:28.530 00:22:28.530 00:22:28.530 01:48:12 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:28.791 [2024-11-21 01:48:12.523714] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:22:28.791 [2024-11-21 01:48:12.523861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78073 ] 00:22:28.791 [2024-11-21 01:48:12.688034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.053 [2024-11-21 01:48:12.803884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.315 [2024-11-21 01:48:13.093930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.315 [2024-11-21 01:48:13.094209] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.315 [2024-11-21 01:48:13.255223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.315 [2024-11-21 01:48:13.255318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.315 [2024-11-21 01:48:13.255344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.315 [2024-11-21 01:48:13.255354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.315 [2024-11-21 01:48:13.255419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.315 [2024-11-21 01:48:13.255431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.315 [2024-11-21 01:48:13.255444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:29.315 [2024-11-21 01:48:13.255453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.315 [2024-11-21 01:48:13.255477] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.315 [2024-11-21 01:48:13.256308] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.315 [2024-11-21 01:48:13.256343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.315 [2024-11-21 01:48:13.256354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.315 [2024-11-21 01:48:13.256364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.872 ms 00:22:29.315 [2024-11-21 01:48:13.256373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.315 [2024-11-21 01:48:13.258710] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:29.578 [2024-11-21 01:48:13.274168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.274444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:29.578 [2024-11-21 01:48:13.274467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.459 ms 00:22:29.578 [2024-11-21 01:48:13.274478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.274695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.274725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:29.578 [2024-11-21 01:48:13.274736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:29.578 [2024-11-21 01:48:13.274745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.286288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:29.578 [2024-11-21 01:48:13.286332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.578 [2024-11-21 01:48:13.286345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.456 ms 00:22:29.578 [2024-11-21 01:48:13.286354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.286449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.286460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.578 [2024-11-21 01:48:13.286470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:29.578 [2024-11-21 01:48:13.286479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.286539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.286553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.578 [2024-11-21 01:48:13.286563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:29.578 [2024-11-21 01:48:13.286572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.286599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.578 [2024-11-21 01:48:13.291162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.291202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.578 [2024-11-21 01:48:13.291214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.571 ms 00:22:29.578 [2024-11-21 01:48:13.291228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.291267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.291277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.578 [2024-11-21 01:48:13.291288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:29.578 [2024-11-21 01:48:13.291296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.291334] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:29.578 [2024-11-21 01:48:13.291361] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:29.578 [2024-11-21 01:48:13.291403] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:29.578 [2024-11-21 01:48:13.291426] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:29.578 [2024-11-21 01:48:13.291539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.578 [2024-11-21 01:48:13.291551] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.578 [2024-11-21 01:48:13.291563] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.578 [2024-11-21 01:48:13.291575] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.578 [2024-11-21 01:48:13.291586] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.578 [2024-11-21 01:48:13.291595] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.578 [2024-11-21 01:48:13.291604] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.578 [2024-11-21 01:48:13.291636] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.578 [2024-11-21 01:48:13.291645] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.578 [2024-11-21 01:48:13.291659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.291669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.578 [2024-11-21 01:48:13.291678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:22:29.578 [2024-11-21 01:48:13.291687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.291774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.578 [2024-11-21 01:48:13.291784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.578 [2024-11-21 01:48:13.291794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:29.578 [2024-11-21 01:48:13.291802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.578 [2024-11-21 01:48:13.291910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.578 [2024-11-21 01:48:13.291925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.578 [2024-11-21 01:48:13.291935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.578 [2024-11-21 01:48:13.291943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.578 [2024-11-21 01:48:13.291952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.578 [2024-11-21 01:48:13.291960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.578 [2024-11-21 01:48:13.291969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.578 [2024-11-21 01:48:13.291979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.578 [2024-11-21 01:48:13.291987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.578 [2024-11-21 01:48:13.291995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.578 [2024-11-21 01:48:13.292003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.579 [2024-11-21 01:48:13.292012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.579 [2024-11-21 01:48:13.292020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.579 [2024-11-21 01:48:13.292029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.579 [2024-11-21 01:48:13.292037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:29.579 [2024-11-21 01:48:13.292051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.579 [2024-11-21 01:48:13.292067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292074] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.579 [2024-11-21 01:48:13.292089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.579 [2024-11-21 01:48:13.292109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.579 [2024-11-21 01:48:13.292129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.579 [2024-11-21 01:48:13.292150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.579 [2024-11-21 01:48:13.292170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.579 [2024-11-21 01:48:13.292184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.579 [2024-11-21 01:48:13.292190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:29.579 [2024-11-21 01:48:13.292197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.579 [2024-11-21 01:48:13.292204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.579 [2024-11-21 01:48:13.292212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:29.579 [2024-11-21 01:48:13.292218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.579 [2024-11-21 01:48:13.292233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:29.579 [2024-11-21 01:48:13.292241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292249] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.579 [2024-11-21 01:48:13.292258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.579 [2024-11-21 01:48:13.292267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.579 [2024-11-21 01:48:13.292284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.579 [2024-11-21 01:48:13.292291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.579 [2024-11-21 01:48:13.292299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.579 
[2024-11-21 01:48:13.292306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.579 [2024-11-21 01:48:13.292313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.579 [2024-11-21 01:48:13.292320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.579 [2024-11-21 01:48:13.292329] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.579 [2024-11-21 01:48:13.292340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.579 [2024-11-21 01:48:13.292358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:29.579 [2024-11-21 01:48:13.292365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:29.579 [2024-11-21 01:48:13.292372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:29.579 [2024-11-21 01:48:13.292379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:29.579 [2024-11-21 01:48:13.292388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:29.579 [2024-11-21 01:48:13.292395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:29.579 [2024-11-21 01:48:13.292402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:29.579 [2024-11-21 01:48:13.292409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:29.579 [2024-11-21 01:48:13.292416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:29.579 [2024-11-21 01:48:13.292453] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.579 [2024-11-21 01:48:13.292464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.579 [2024-11-21 01:48:13.292481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.579 [2024-11-21 01:48:13.292489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.579 [2024-11-21 01:48:13.292497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.579 [2024-11-21 01:48:13.292510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.292519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.579 [2024-11-21 01:48:13.292528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:22:29.579 [2024-11-21 01:48:13.292536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.330998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.331223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.579 [2024-11-21 01:48:13.331507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.412 ms 00:22:29.579 [2024-11-21 01:48:13.331550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.331693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.331877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.579 [2024-11-21 01:48:13.331905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:29.579 [2024-11-21 01:48:13.331924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.379570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.379800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.579 [2024-11-21 01:48:13.379993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.566 ms 00:22:29.579 [2024-11-21 01:48:13.380024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.380088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.380114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.579 [2024-11-21 01:48:13.380136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.579 [2024-11-21 01:48:13.380230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.381057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.381127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.579 [2024-11-21 01:48:13.381219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:22:29.579 [2024-11-21 01:48:13.381290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.381508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.381569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.579 [2024-11-21 01:48:13.381597] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:22:29.579 [2024-11-21 01:48:13.381643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.399751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.399907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.579 [2024-11-21 01:48:13.399972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.064 ms 00:22:29.579 [2024-11-21 01:48:13.399995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.579 [2024-11-21 01:48:13.415412] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:29.579 [2024-11-21 01:48:13.415589] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:29.579 [2024-11-21 01:48:13.415675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.579 [2024-11-21 01:48:13.415699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:29.579 [2024-11-21 01:48:13.415722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.548 ms 00:22:29.580 [2024-11-21 01:48:13.415749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.580 [2024-11-21 01:48:13.441732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.580 [2024-11-21 01:48:13.441893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:29.580 [2024-11-21 01:48:13.441954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.921 ms 00:22:29.580 [2024-11-21 01:48:13.441978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.580 [2024-11-21 01:48:13.455100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.580 [2024-11-21 01:48:13.455259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:29.580 [2024-11-21 01:48:13.455317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.970 ms 00:22:29.580 [2024-11-21 01:48:13.455341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.580 [2024-11-21 01:48:13.468088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.580 [2024-11-21 01:48:13.468248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:29.580 [2024-11-21 01:48:13.468267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.569 ms 00:22:29.580 [2024-11-21 01:48:13.468276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.580 [2024-11-21 01:48:13.468953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.580 [2024-11-21 01:48:13.468982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.580 [2024-11-21 01:48:13.468994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:22:29.580 [2024-11-21 01:48:13.469006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.540888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.540947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:29.841 [2024-11-21 01:48:13.540970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.861 ms 00:22:29.841 [2024-11-21 01:48:13.540980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.552495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:29.841 [2024-11-21 01:48:13.556295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.556488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:29.841 [2024-11-21 01:48:13.556510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.259 ms 00:22:29.841 [2024-11-21 01:48:13.556521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.556627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.556641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:29.841 [2024-11-21 01:48:13.556651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:29.841 [2024-11-21 01:48:13.556666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.556751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.556764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:29.841 [2024-11-21 01:48:13.556773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:29.841 [2024-11-21 01:48:13.556784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.556810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.556821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:29.841 [2024-11-21 01:48:13.556830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.841 [2024-11-21 01:48:13.556838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.556879] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:29.841 [2024-11-21 01:48:13.556895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.556905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:29.841 [2024-11-21 01:48:13.556914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:29.841 [2024-11-21 01:48:13.556923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.583373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.583553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:29.841 [2024-11-21 01:48:13.583574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.425 ms 00:22:29.841 [2024-11-21 01:48:13.583591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.841 [2024-11-21 01:48:13.583694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.841 [2024-11-21 01:48:13.583708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:29.841 [2024-11-21 01:48:13.583718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:22:29.841 [2024-11-21 01:48:13.583726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:29.841 [2024-11-21 01:48:13.585143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.358 ms, result 0 00:22:31.230  [2024-11-21T01:48:16.132Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-21T01:48:17.077Z] Copying: 22/1024 [MB] (10 MBps) [2024-11-21T01:48:18.023Z] Copying: 41/1024 [MB] (18 MBps) [2024-11-21T01:48:18.967Z] Copying: 59/1024 [MB] (18 MBps) [2024-11-21T01:48:19.910Z] Copying: 75/1024 [MB] (15 MBps) [2024-11-21T01:48:20.937Z] Copying: 92/1024 [MB] (16 MBps) [2024-11-21T01:48:21.880Z] Copying: 111/1024 [MB] (19 MBps) [2024-11-21T01:48:22.824Z] Copying: 131/1024 [MB] (20 MBps) [2024-11-21T01:48:24.212Z] Copying: 146/1024 [MB] (15 MBps) [2024-11-21T01:48:24.785Z] Copying: 163/1024 [MB] (16 MBps) [2024-11-21T01:48:26.173Z] Copying: 177/1024 [MB] (14 MBps) [2024-11-21T01:48:27.116Z] Copying: 188/1024 [MB] (10 MBps) [2024-11-21T01:48:28.058Z] Copying: 199/1024 [MB] (10 MBps) [2024-11-21T01:48:29.003Z] Copying: 210/1024 [MB] (10 MBps) [2024-11-21T01:48:29.945Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-21T01:48:30.885Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-21T01:48:31.828Z] Copying: 243/1024 [MB] (10 MBps) [2024-11-21T01:48:33.216Z] Copying: 254/1024 [MB] (10 MBps) [2024-11-21T01:48:33.788Z] Copying: 273/1024 [MB] (18 MBps) [2024-11-21T01:48:35.177Z] Copying: 286/1024 [MB] (13 MBps) [2024-11-21T01:48:36.120Z] Copying: 299/1024 [MB] (12 MBps) [2024-11-21T01:48:37.064Z] Copying: 314/1024 [MB] (15 MBps) [2024-11-21T01:48:38.009Z] Copying: 332/1024 [MB] (17 MBps) [2024-11-21T01:48:38.953Z] Copying: 351/1024 [MB] (19 MBps) [2024-11-21T01:48:39.895Z] Copying: 365/1024 [MB] (14 MBps) [2024-11-21T01:48:40.837Z] Copying: 378/1024 [MB] (13 MBps) [2024-11-21T01:48:41.782Z] Copying: 398/1024 [MB] (19 MBps) [2024-11-21T01:48:43.171Z] Copying: 416/1024 [MB] (18 MBps) [2024-11-21T01:48:44.115Z] Copying: 439/1024 [MB] (22 MBps) [2024-11-21T01:48:45.059Z] Copying: 453/1024 [MB] (14 MBps) [2024-11-21T01:48:46.002Z] Copying: 473/1024 [MB] (19 MBps) [2024-11-21T01:48:46.946Z] Copying: 495/1024 [MB] (22 MBps) [2024-11-21T01:48:47.889Z] Copying: 514/1024 [MB] (18 MBps) [2024-11-21T01:48:48.831Z] Copying: 530/1024 [MB] (16 MBps) [2024-11-21T01:48:50.211Z] Copying: 541/1024 [MB] (11 MBps) [2024-11-21T01:48:51.152Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-21T01:48:52.118Z] Copying: 564/1024 [MB] (10 MBps) [2024-11-21T01:48:52.789Z] Copying: 574/1024 [MB] (10 MBps) [2024-11-21T01:48:54.177Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-21T01:48:55.122Z] Copying: 595/1024 [MB] (10 MBps) [2024-11-21T01:48:56.067Z] Copying: 616/1024 [MB] (20 MBps) [2024-11-21T01:48:57.012Z] Copying: 627/1024 [MB] (10 MBps) [2024-11-21T01:48:57.957Z] Copying: 638/1024 [MB] (10 MBps) [2024-11-21T01:48:58.901Z] Copying: 648/1024 [MB] (10 MBps) [2024-11-21T01:48:59.844Z] Copying: 659/1024 [MB] (10 MBps) [2024-11-21T01:49:00.789Z] Copying: 683/1024 [MB] (24 MBps) [2024-11-21T01:49:02.179Z] Copying: 699/1024 [MB] (15 MBps) [2024-11-21T01:49:03.125Z] Copying: 718/1024 [MB] (18 MBps) [2024-11-21T01:49:04.069Z] Copying: 740/1024 [MB] (22 MBps) [2024-11-21T01:49:05.014Z] Copying: 756/1024 [MB] (15 MBps) [2024-11-21T01:49:05.961Z] Copying: 774/1024 [MB] (18 MBps) [2024-11-21T01:49:06.902Z] Copying: 794/1024 [MB] (20 MBps) [2024-11-21T01:49:07.845Z] Copying: 810/1024 [MB] (15 MBps) [2024-11-21T01:49:08.788Z] Copying: 829/1024 [MB] (19 MBps) [2024-11-21T01:49:10.170Z] Copying: 849/1024 [MB] (19 MBps) [2024-11-21T01:49:10.800Z] Copying: 865/1024 [MB] (15 MBps) 
[2024-11-21T01:49:12.181Z] Copying: 884/1024 [MB] (19 MBps) [2024-11-21T01:49:13.125Z] Copying: 905/1024 [MB] (20 MBps) [2024-11-21T01:49:14.069Z] Copying: 919/1024 [MB] (13 MBps) [2024-11-21T01:49:15.012Z] Copying: 932/1024 [MB] (13 MBps) [2024-11-21T01:49:15.956Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-21T01:49:16.899Z] Copying: 954/1024 [MB] (10 MBps) [2024-11-21T01:49:17.843Z] Copying: 964/1024 [MB] (10 MBps) [2024-11-21T01:49:18.787Z] Copying: 975/1024 [MB] (10 MBps) [2024-11-21T01:49:20.180Z] Copying: 986/1024 [MB] (10 MBps) [2024-11-21T01:49:21.125Z] Copying: 996/1024 [MB] (10 MBps) [2024-11-21T01:49:21.700Z] Copying: 1012/1024 [MB] (16 MBps) [2024-11-21T01:49:21.700Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-21 01:49:21.476300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.476392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:37.743 [2024-11-21 01:49:21.476413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:37.743 [2024-11-21 01:49:21.476426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.476457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:37.743 [2024-11-21 01:49:21.480919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.480989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:37.743 [2024-11-21 01:49:21.481014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.439 ms 00:23:37.743 [2024-11-21 01:49:21.481026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.481381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.481406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:37.743 [2024-11-21 01:49:21.481420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:23:37.743 [2024-11-21 01:49:21.481432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.487023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.487238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:37.743 [2024-11-21 01:49:21.487264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.570 ms 00:23:37.743 [2024-11-21 01:49:21.487277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.495328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.495371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:37.743 [2024-11-21 01:49:21.495383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.012 ms 00:23:37.743 [2024-11-21 01:49:21.495391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.521821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.521868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:37.743 [2024-11-21 01:49:21.521881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.341 ms 00:23:37.743 [2024-11-21 01:49:21.521889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:37.743 [2024-11-21 01:49:21.537626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.537671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:37.743 [2024-11-21 01:49:21.537684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.691 ms 00:23:37.743 [2024-11-21 01:49:21.537693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.537832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.537850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:37.743 [2024-11-21 01:49:21.537859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:37.743 [2024-11-21 01:49:21.537867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.563178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.563351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:37.743 [2024-11-21 01:49:21.563371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.294 ms 00:23:37.743 [2024-11-21 01:49:21.563378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.588686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.588741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:37.743 [2024-11-21 01:49:21.588753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.271 ms 00:23:37.743 [2024-11-21 01:49:21.588761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.612820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.612866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:37.743 [2024-11-21 01:49:21.612877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:23:37.743 [2024-11-21 01:49:21.612885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.637758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.743 [2024-11-21 01:49:21.637932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:37.743 [2024-11-21 01:49:21.637951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.801 ms 00:23:37.743 [2024-11-21 01:49:21.637959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.743 [2024-11-21 01:49:21.638059] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:37.743 [2024-11-21 01:49:21.638091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: 
free 00:23:37.743 [2024-11-21 01:49:21.638142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:37.743 [2024-11-21 01:49:21.638326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638741] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:37.744 [2024-11-21 01:49:21.638921] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:37.744 [2024-11-21 01:49:21.638932] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f555680a-895d-4d20-a79f-de599ad6b77b 00:23:37.744 [2024-11-21 01:49:21.638940] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:37.744 [2024-11-21 01:49:21.638948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:37.744 [2024-11-21 01:49:21.638956] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:37.744 
[2024-11-21 01:49:21.638964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:37.744 [2024-11-21 01:49:21.638971] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:37.744 [2024-11-21 01:49:21.638979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:37.744 [2024-11-21 01:49:21.638993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:37.744 [2024-11-21 01:49:21.639000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:37.744 [2024-11-21 01:49:21.639006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:37.744 [2024-11-21 01:49:21.639014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.744 [2024-11-21 01:49:21.639022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:37.744 [2024-11-21 01:49:21.639032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:23:37.744 [2024-11-21 01:49:21.639039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.744 [2024-11-21 01:49:21.652606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.744 [2024-11-21 01:49:21.652657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:37.744 [2024-11-21 01:49:21.652668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.532 ms 00:23:37.744 [2024-11-21 01:49:21.652676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.744 [2024-11-21 01:49:21.653069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.744 [2024-11-21 01:49:21.653083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:37.744 [2024-11-21 01:49:21.653093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:23:37.744 [2024-11-21 01:49:21.653107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.744 [2024-11-21 01:49:21.689240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.744 [2024-11-21 01:49:21.689297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.744 [2024-11-21 01:49:21.689310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.745 [2024-11-21 01:49:21.689320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.745 [2024-11-21 01:49:21.689395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.745 [2024-11-21 01:49:21.689405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.745 [2024-11-21 01:49:21.689415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.745 [2024-11-21 01:49:21.689430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.745 [2024-11-21 01:49:21.689517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.745 [2024-11-21 01:49:21.689528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:37.745 [2024-11-21 01:49:21.689537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.745 [2024-11-21 01:49:21.689546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.745 [2024-11-21 01:49:21.689563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:37.745 [2024-11-21 01:49:21.689572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:23:37.745 [2024-11-21 01:49:21.689581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:37.745 [2024-11-21 01:49:21.689590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.773168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.773221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.006 [2024-11-21 01:49:21.773235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.773243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.841726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.841777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.006 [2024-11-21 01:49:21.841789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.841798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.841868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.841878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.006 [2024-11-21 01:49:21.841887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.841896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.841958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.841969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.006 [2024-11-21 01:49:21.841978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.841986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.842086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.842097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.006 [2024-11-21 01:49:21.842106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.842114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.842147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.842157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.006 [2024-11-21 01:49:21.842165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.842174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.842218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 [2024-11-21 01:49:21.842231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.006 [2024-11-21 01:49:21.842239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.842247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.842294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.006 
[2024-11-21 01:49:21.842305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.006 [2024-11-21 01:49:21.842314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.006 [2024-11-21 01:49:21.842322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.006 [2024-11-21 01:49:21.842458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.129 ms, result 0 00:23:38.951 00:23:38.951 00:23:38.951 01:49:22 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:40.921 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:40.921 01:49:24 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:40.921 [2024-11-21 01:49:24.822902] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:23:40.921 [2024-11-21 01:49:24.822991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78814 ] 00:23:41.182 [2024-11-21 01:49:24.976140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.182 [2024-11-21 01:49:25.080386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.445 [2024-11-21 01:49:25.367864] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:41.445 [2024-11-21 01:49:25.367949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:41.708 [2024-11-21 01:49:25.528527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.528586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:41.708 [2024-11-21 01:49:25.528608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:41.708 [2024-11-21 01:49:25.528639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.528694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.528706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:41.708 [2024-11-21 01:49:25.528718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:41.708 [2024-11-21 01:49:25.528726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.528747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:41.708 [2024-11-21 01:49:25.529507] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:41.708 [2024-11-21 01:49:25.529533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.529541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:41.708 [2024-11-21 01:49:25.529550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:23:41.708 [2024-11-21 01:49:25.529558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.531237] 
mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:41.708 [2024-11-21 01:49:25.545167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.545218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:41.708 [2024-11-21 01:49:25.545233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.932 ms 00:23:41.708 [2024-11-21 01:49:25.545242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.545322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.545332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:41.708 [2024-11-21 01:49:25.545346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:41.708 [2024-11-21 01:49:25.545353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.553352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.553578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:41.708 [2024-11-21 01:49:25.553597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.896 ms 00:23:41.708 [2024-11-21 01:49:25.553606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.553718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.553728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:41.708 [2024-11-21 01:49:25.553738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:41.708 [2024-11-21 01:49:25.553746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.553790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.553800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:41.708 [2024-11-21 01:49:25.553809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:41.708 [2024-11-21 01:49:25.553817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.553840] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:41.708 [2024-11-21 01:49:25.557715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.557752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:41.708 [2024-11-21 01:49:25.557763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.881 ms 00:23:41.708 [2024-11-21 01:49:25.557774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.557808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.708 [2024-11-21 01:49:25.557816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:41.708 [2024-11-21 01:49:25.557825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:41.708 [2024-11-21 01:49:25.557833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.708 [2024-11-21 01:49:25.557885] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:41.708 [2024-11-21 01:49:25.557908] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:41.708 [2024-11-21 01:49:25.557945] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:41.708 [2024-11-21 01:49:25.557964] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:41.708 [2024-11-21 01:49:25.558071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:41.708 [2024-11-21 01:49:25.558083] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:41.709 [2024-11-21 01:49:25.558094] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:41.709 [2024-11-21 01:49:25.558105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558114] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:41.709 [2024-11-21 01:49:25.558131] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:41.709 [2024-11-21 01:49:25.558140] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:41.709 [2024-11-21 01:49:25.558147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:41.709 [2024-11-21 01:49:25.558159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.709 [2024-11-21 01:49:25.558167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:41.709 [2024-11-21 01:49:25.558176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:23:41.709 [2024-11-21 01:49:25.558184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.709 [2024-11-21 01:49:25.558266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.709 [2024-11-21 01:49:25.558275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:41.709 [2024-11-21 01:49:25.558283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:41.709 [2024-11-21 01:49:25.558290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.709 [2024-11-21 01:49:25.558393] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:41.709 [2024-11-21 01:49:25.558406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:41.709 [2024-11-21 01:49:25.558415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:41.709 [2024-11-21 01:49:25.558438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:41.709 [2024-11-21 01:49:25.558461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:23:41.709 [2024-11-21 01:49:25.558468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:41.709 [2024-11-21 01:49:25.558475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:41.709 [2024-11-21 01:49:25.558482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:41.709 [2024-11-21 01:49:25.558489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:41.709 [2024-11-21 01:49:25.558496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:41.709 [2024-11-21 01:49:25.558507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:41.709 [2024-11-21 01:49:25.558520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:41.709 [2024-11-21 01:49:25.558535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:41.709 [2024-11-21 01:49:25.558555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:41.709 [2024-11-21 01:49:25.558575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:41.709 [2024-11-21 01:49:25.558594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:41.709 [2024-11-21 01:49:25.558637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:41.709 [2024-11-21 01:49:25.558659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:41.709 [2024-11-21 01:49:25.558674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:41.709 [2024-11-21 01:49:25.558681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:41.709 [2024-11-21 01:49:25.558687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:41.709 [2024-11-21 01:49:25.558694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:41.709 [2024-11-21 01:49:25.558701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:41.709 [2024-11-21 01:49:25.558708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558715] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:41.709 [2024-11-21 01:49:25.558722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:41.709 [2024-11-21 01:49:25.558730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558737] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:41.709 [2024-11-21 01:49:25.558745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:41.709 [2024-11-21 01:49:25.558752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:41.709 [2024-11-21 01:49:25.558776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:41.709 [2024-11-21 01:49:25.558783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:41.709 [2024-11-21 01:49:25.558791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:41.709 [2024-11-21 01:49:25.558798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:41.709 [2024-11-21 01:49:25.558805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:41.709 [2024-11-21 01:49:25.558812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:41.709 [2024-11-21 01:49:25.558821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:41.709 [2024-11-21 01:49:25.558830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:41.709 [2024-11-21 01:49:25.558847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:41.709 [2024-11-21 01:49:25.558854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:41.709 [2024-11-21 01:49:25.558861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:41.709 [2024-11-21 01:49:25.558869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:41.709 [2024-11-21 01:49:25.558876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:41.709 [2024-11-21 01:49:25.558883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:41.709 [2024-11-21 01:49:25.558891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:41.709 [2024-11-21 01:49:25.558899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:41.709 [2024-11-21 01:49:25.558906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558913] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:41.709 [2024-11-21 01:49:25.558943] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:41.709 [2024-11-21 01:49:25.558955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:41.709 [2024-11-21 01:49:25.558971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:41.709 [2024-11-21 01:49:25.558978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:41.709 [2024-11-21 01:49:25.558986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:41.709 [2024-11-21 01:49:25.558994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.709 [2024-11-21 01:49:25.559002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:41.709 [2024-11-21 01:49:25.559009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:23:41.709 [2024-11-21 01:49:25.559017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.709 [2024-11-21 01:49:25.590631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.709 [2024-11-21 01:49:25.590679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:41.709 [2024-11-21 01:49:25.590692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.569 ms 00:23:41.710 [2024-11-21 01:49:25.590700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.590795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.590804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:41.710 [2024-11-21 01:49:25.590813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:41.710 [2024-11-21 01:49:25.590821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.637802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.637852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:41.710 [2024-11-21 01:49:25.637866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.922 ms 00:23:41.710 [2024-11-21 01:49:25.637875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.637924] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.637934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:41.710 [2024-11-21 01:49:25.637944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:41.710 [2024-11-21 01:49:25.637956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.638500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.638523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:41.710 [2024-11-21 01:49:25.638535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:23:41.710 [2024-11-21 01:49:25.638544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.638740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.638751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:41.710 [2024-11-21 01:49:25.638760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:23:41.710 [2024-11-21 01:49:25.638774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.710 [2024-11-21 01:49:25.654358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.710 [2024-11-21 01:49:25.654404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:41.710 [2024-11-21 01:49:25.654418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.557 ms 00:23:41.710 [2024-11-21 01:49:25.654426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.668941] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:41.972 [2024-11-21 01:49:25.668991] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:41.972 [2024-11-21 01:49:25.669005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.669013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:41.972 [2024-11-21 01:49:25.669022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.473 ms 00:23:41.972 [2024-11-21 01:49:25.669029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.694833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.694887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:41.972 [2024-11-21 01:49:25.694899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.752 ms 00:23:41.972 [2024-11-21 01:49:25.694907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.707391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.707437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:41.972 [2024-11-21 01:49:25.707449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.431 ms 00:23:41.972 [2024-11-21 01:49:25.707456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.720161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 
01:49:25.720204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:41.972 [2024-11-21 01:49:25.720216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:23:41.972 [2024-11-21 01:49:25.720224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.720895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.720919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:41.972 [2024-11-21 01:49:25.720930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:23:41.972 [2024-11-21 01:49:25.720941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.784817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.784881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:41.972 [2024-11-21 01:49:25.784904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.855 ms 00:23:41.972 [2024-11-21 01:49:25.784913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.795903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:41.972 [2024-11-21 01:49:25.798651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.798830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:41.972 [2024-11-21 01:49:25.798849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:23:41.972 [2024-11-21 01:49:25.798858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.798948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.798959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:41.972 [2024-11-21 01:49:25.798969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:41.972 [2024-11-21 01:49:25.798980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.799052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.799065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:41.972 [2024-11-21 01:49:25.799074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:41.972 [2024-11-21 01:49:25.799082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.799104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.799113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:41.972 [2024-11-21 01:49:25.799122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:41.972 [2024-11-21 01:49:25.799130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.799166] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:41.972 [2024-11-21 01:49:25.799180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.799190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:41.972 
[2024-11-21 01:49:25.799199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:41.972 [2024-11-21 01:49:25.799208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.824801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.824848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:41.972 [2024-11-21 01:49:25.824862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.574 ms 00:23:41.972 [2024-11-21 01:49:25.824877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.824964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.972 [2024-11-21 01:49:25.824974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:41.972 [2024-11-21 01:49:25.824983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:41.972 [2024-11-21 01:49:25.824993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.972 [2024-11-21 01:49:25.826278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.230 ms, result 0 00:23:42.917  [2024-11-21T01:49:28.260Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-21T01:49:29.205Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-21T01:49:30.149Z] Copying: 51/1024 [MB] (16 MBps) [2024-11-21T01:49:31.092Z] Copying: 70/1024 [MB] (18 MBps) [2024-11-21T01:49:32.035Z] Copying: 90/1024 [MB] (19 MBps) [2024-11-21T01:49:32.978Z] Copying: 107/1024 [MB] (17 MBps) [2024-11-21T01:49:33.923Z] Copying: 122/1024 [MB] (15 MBps) [2024-11-21T01:49:34.867Z] Copying: 140/1024 [MB] (17 MBps) [2024-11-21T01:49:36.254Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-21T01:49:37.199Z] Copying: 161/1024 [MB] (10 MBps) [2024-11-21T01:49:38.143Z] Copying: 171/1024 [MB] (10 MBps) [2024-11-21T01:49:39.087Z] Copying: 182/1024 [MB] (10 MBps) [2024-11-21T01:49:40.043Z] Copying: 225/1024 [MB] (42 MBps) [2024-11-21T01:49:40.989Z] Copying: 253/1024 [MB] (28 MBps) [2024-11-21T01:49:41.935Z] Copying: 265/1024 [MB] (12 MBps) [2024-11-21T01:49:42.879Z] Copying: 279/1024 [MB] (13 MBps) [2024-11-21T01:49:44.266Z] Copying: 299/1024 [MB] (19 MBps) [2024-11-21T01:49:44.842Z] Copying: 329/1024 [MB] (30 MBps) [2024-11-21T01:49:46.230Z] Copying: 373/1024 [MB] (43 MBps) [2024-11-21T01:49:47.176Z] Copying: 388/1024 [MB] (14 MBps) [2024-11-21T01:49:48.121Z] Copying: 407/1024 [MB] (19 MBps) [2024-11-21T01:49:49.064Z] Copying: 424/1024 [MB] (17 MBps) [2024-11-21T01:49:50.022Z] Copying: 442/1024 [MB] (18 MBps) [2024-11-21T01:49:50.964Z] Copying: 454/1024 [MB] (12 MBps) [2024-11-21T01:49:51.906Z] Copying: 474/1024 [MB] (19 MBps) [2024-11-21T01:49:52.851Z] Copying: 491/1024 [MB] (16 MBps) [2024-11-21T01:49:54.238Z] Copying: 512/1024 [MB] (21 MBps) [2024-11-21T01:49:55.182Z] Copying: 530/1024 [MB] (17 MBps) [2024-11-21T01:49:56.210Z] Copying: 544/1024 [MB] (14 MBps) [2024-11-21T01:49:57.172Z] Copying: 565/1024 [MB] (21 MBps) [2024-11-21T01:49:58.117Z] Copying: 585/1024 [MB] (19 MBps) [2024-11-21T01:49:59.061Z] Copying: 599/1024 [MB] (14 MBps) [2024-11-21T01:50:00.005Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-21T01:50:00.949Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-21T01:50:01.895Z] Copying: 631/1024 [MB] (10 MBps) [2024-11-21T01:50:03.281Z] Copying: 664/1024 [MB] (32 MBps) [2024-11-21T01:50:03.852Z] Copying: 686/1024 [MB] (22 MBps) [2024-11-21T01:50:05.240Z] 
Copying: 705/1024 [MB] (19 MBps) [2024-11-21T01:50:06.183Z] Copying: 724/1024 [MB] (18 MBps) [2024-11-21T01:50:07.127Z] Copying: 742/1024 [MB] (18 MBps) [2024-11-21T01:50:08.069Z] Copying: 782/1024 [MB] (40 MBps) [2024-11-21T01:50:09.013Z] Copying: 820/1024 [MB] (37 MBps) [2024-11-21T01:50:09.955Z] Copying: 835/1024 [MB] (14 MBps) [2024-11-21T01:50:10.896Z] Copying: 854/1024 [MB] (19 MBps) [2024-11-21T01:50:12.283Z] Copying: 873/1024 [MB] (18 MBps) [2024-11-21T01:50:12.856Z] Copying: 892/1024 [MB] (19 MBps) [2024-11-21T01:50:14.241Z] Copying: 912/1024 [MB] (19 MBps) [2024-11-21T01:50:15.187Z] Copying: 931/1024 [MB] (19 MBps) [2024-11-21T01:50:16.130Z] Copying: 950/1024 [MB] (18 MBps) [2024-11-21T01:50:17.073Z] Copying: 970/1024 [MB] (20 MBps) [2024-11-21T01:50:18.015Z] Copying: 985/1024 [MB] (15 MBps) [2024-11-21T01:50:18.960Z] Copying: 1000/1024 [MB] (14 MBps) [2024-11-21T01:50:19.902Z] Copying: 1014/1024 [MB] (13 MBps) [2024-11-21T01:50:20.475Z] Copying: 1048192/1048576 [kB] (9840 kBps) [2024-11-21T01:50:20.475Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-21 01:50:20.217596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.217692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.518 [2024-11-21 01:50:20.217709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.518 [2024-11-21 01:50:20.217732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.219970] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.518 [2024-11-21 01:50:20.224362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.224411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.518 [2024-11-21 01:50:20.224423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.342 ms 00:24:36.518 [2024-11-21 01:50:20.224434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.237264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.237327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.518 [2024-11-21 01:50:20.237342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.058 ms 00:24:36.518 [2024-11-21 01:50:20.237351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.263178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.263251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:36.518 [2024-11-21 01:50:20.263264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.782 ms 00:24:36.518 [2024-11-21 01:50:20.263273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.269471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.269512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:36.518 [2024-11-21 01:50:20.269523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:24:36.518 [2024-11-21 01:50:20.269531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.296070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:36.518 [2024-11-21 01:50:20.296122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:36.518 [2024-11-21 01:50:20.296135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.487 ms 00:24:36.518 [2024-11-21 01:50:20.296142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.518 [2024-11-21 01:50:20.312320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.518 [2024-11-21 01:50:20.312379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:36.518 [2024-11-21 01:50:20.312393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.129 ms 00:24:36.518 [2024-11-21 01:50:20.312401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.605490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.780 [2024-11-21 01:50:20.605558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.780 [2024-11-21 01:50:20.605572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 293.035 ms 00:24:36.780 [2024-11-21 01:50:20.605581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.631306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.780 [2024-11-21 01:50:20.631357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.780 [2024-11-21 01:50:20.631370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.708 ms 00:24:36.780 [2024-11-21 01:50:20.631378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.656674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.780 [2024-11-21 01:50:20.656742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.780 [2024-11-21 01:50:20.656754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.249 ms 00:24:36.780 [2024-11-21 01:50:20.656761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.681673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.780 [2024-11-21 01:50:20.681721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.780 [2024-11-21 01:50:20.681733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.864 ms 00:24:36.780 [2024-11-21 01:50:20.681740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.706197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.780 [2024-11-21 01:50:20.706240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.780 [2024-11-21 01:50:20.706252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.383 ms 00:24:36.780 [2024-11-21 01:50:20.706259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.780 [2024-11-21 01:50:20.706302] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.780 [2024-11-21 01:50:20.706318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104192 / 261120 wr_cnt: 1 state: open 00:24:36.780 [2024-11-21 01:50:20.706328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.780 [2024-11-21 01:50:20.706442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706539] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 
[2024-11-21 01:50:20.706759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:24:36.781 [2024-11-21 01:50:20.706957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.706997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:36.781 [2024-11-21 01:50:20.707142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:36.781 [2024-11-21 01:50:20.707151] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f555680a-895d-4d20-a79f-de599ad6b77b 
00:24:36.781 [2024-11-21 01:50:20.707159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104192 00:24:36.781 [2024-11-21 01:50:20.707167] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105152 00:24:36.781 [2024-11-21 01:50:20.707175] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104192 00:24:36.781 [2024-11-21 01:50:20.707184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:24:36.782 [2024-11-21 01:50:20.707192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:36.782 [2024-11-21 01:50:20.707204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:36.782 [2024-11-21 01:50:20.707218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:36.782 [2024-11-21 01:50:20.707225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:36.782 [2024-11-21 01:50:20.707232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:36.782 [2024-11-21 01:50:20.707240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.782 [2024-11-21 01:50:20.707248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:36.782 [2024-11-21 01:50:20.707257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:24:36.782 [2024-11-21 01:50:20.707265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.782 [2024-11-21 01:50:20.720651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.782 [2024-11-21 01:50:20.720696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:36.782 [2024-11-21 01:50:20.720708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.366 ms 00:24:36.782 [2024-11-21 01:50:20.720723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.782 [2024-11-21 01:50:20.721124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.782 [2024-11-21 01:50:20.721135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:36.782 [2024-11-21 01:50:20.721144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:24:36.782 [2024-11-21 01:50:20.721152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.757508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.757563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.043 [2024-11-21 01:50:20.757579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.757588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.757668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.757678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.043 [2024-11-21 01:50:20.757688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.757698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.757765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.757777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.043 [2024-11-21 
01:50:20.757787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.757799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.757815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.757824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.043 [2024-11-21 01:50:20.757833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.757841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.841280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.841341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.043 [2024-11-21 01:50:20.841361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.841370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.043 [2024-11-21 01:50:20.910244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.043 [2024-11-21 01:50:20.910356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.043 [2024-11-21 01:50:20.910431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.043 [2024-11-21 01:50:20.910554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:37.043 [2024-11-21 01:50:20.910633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910692] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.043 [2024-11-21 01:50:20.910701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:37.043 [2024-11-21 01:50:20.910770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.043 [2024-11-21 01:50:20.910778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:37.043 [2024-11-21 01:50:20.910786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.043 [2024-11-21 01:50:20.910921] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 693.802 ms, result 0 00:24:38.431 00:24:38.431 00:24:38.431 01:50:22 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:38.431 [2024-11-21 01:50:22.309340] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:24:38.431 [2024-11-21 01:50:22.309526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79391 ] 00:24:38.692 [2024-11-21 01:50:22.476552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.692 [2024-11-21 01:50:22.597687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.953 [2024-11-21 01:50:22.887411] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:38.953 [2024-11-21 01:50:22.887492] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:39.215 [2024-11-21 01:50:23.048803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.215 [2024-11-21 01:50:23.048867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:39.215 [2024-11-21 01:50:23.048887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:39.215 [2024-11-21 01:50:23.048896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.215 [2024-11-21 01:50:23.048952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.048963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.216 [2024-11-21 01:50:23.048975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:39.216 [2024-11-21 01:50:23.048983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.049005] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:39.216 [2024-11-21 01:50:23.049790] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:39.216 [2024-11-21 01:50:23.049811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.049819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.216 [2024-11-21 01:50:23.049828] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:24:39.216 [2024-11-21 01:50:23.049837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.051556] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:39.216 [2024-11-21 01:50:23.065733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.065783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:39.216 [2024-11-21 01:50:23.065798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.179 ms 00:24:39.216 [2024-11-21 01:50:23.065806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.065890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.065901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:39.216 [2024-11-21 01:50:23.065913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:39.216 [2024-11-21 01:50:23.065920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.074175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.074221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.216 [2024-11-21 01:50:23.074233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.177 ms 00:24:39.216 [2024-11-21 01:50:23.074241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.074327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.074337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.216 [2024-11-21 01:50:23.074346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:39.216 [2024-11-21 01:50:23.074354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.074397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.074408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:39.216 [2024-11-21 01:50:23.074417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:39.216 [2024-11-21 01:50:23.074425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.074450] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:39.216 [2024-11-21 01:50:23.078535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.078573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.216 [2024-11-21 01:50:23.078585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:24:39.216 [2024-11-21 01:50:23.078596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.078645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.078654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:39.216 [2024-11-21 01:50:23.078663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:39.216 [2024-11-21 01:50:23.078671] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.078722] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:39.216 [2024-11-21 01:50:23.078746] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:39.216 [2024-11-21 01:50:23.078783] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:39.216 [2024-11-21 01:50:23.078802] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:39.216 [2024-11-21 01:50:23.078908] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:39.216 [2024-11-21 01:50:23.078920] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:39.216 [2024-11-21 01:50:23.078931] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:39.216 [2024-11-21 01:50:23.078942] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:39.216 [2024-11-21 01:50:23.078951] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:39.216 [2024-11-21 01:50:23.078960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:39.216 [2024-11-21 01:50:23.078968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:39.216 [2024-11-21 01:50:23.078976] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:39.216 [2024-11-21 01:50:23.078984] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:39.216 [2024-11-21 01:50:23.078995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.079003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:39.216 [2024-11-21 01:50:23.079010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:24:39.216 [2024-11-21 01:50:23.079017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.079099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.216 [2024-11-21 01:50:23.079108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:39.216 [2024-11-21 01:50:23.079115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:39.216 [2024-11-21 01:50:23.079123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.216 [2024-11-21 01:50:23.079229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:39.216 [2024-11-21 01:50:23.079242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:39.216 [2024-11-21 01:50:23.079251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:39.216 [2024-11-21 01:50:23.079274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:39.216 
[2024-11-21 01:50:23.079289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:39.216 [2024-11-21 01:50:23.079296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.216 [2024-11-21 01:50:23.079309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:39.216 [2024-11-21 01:50:23.079316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:39.216 [2024-11-21 01:50:23.079323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:39.216 [2024-11-21 01:50:23.079331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:39.216 [2024-11-21 01:50:23.079339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:39.216 [2024-11-21 01:50:23.079353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:39.216 [2024-11-21 01:50:23.079367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:39.216 [2024-11-21 01:50:23.079387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:39.216 [2024-11-21 01:50:23.079407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:39.216 [2024-11-21 01:50:23.079428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:39.216 [2024-11-21 01:50:23.079448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:39.216 [2024-11-21 01:50:23.079461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:39.216 [2024-11-21 01:50:23.079468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.216 [2024-11-21 01:50:23.079481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:39.216 [2024-11-21 01:50:23.079487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:39.216 [2024-11-21 01:50:23.079494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:39.216 [2024-11-21 01:50:23.079500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:39.216 [2024-11-21 01:50:23.079508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:24:39.216 [2024-11-21 01:50:23.079514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:39.216 [2024-11-21 01:50:23.079527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:39.216 [2024-11-21 01:50:23.079533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.216 [2024-11-21 01:50:23.079539] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:39.216 [2024-11-21 01:50:23.079547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:39.217 [2024-11-21 01:50:23.079555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:39.217 [2024-11-21 01:50:23.079563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:39.217 [2024-11-21 01:50:23.079571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:39.217 [2024-11-21 01:50:23.079578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:39.217 [2024-11-21 01:50:23.079585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:39.217 [2024-11-21 01:50:23.079593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:39.217 [2024-11-21 01:50:23.079600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:39.217 [2024-11-21 01:50:23.079607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:39.217 [2024-11-21 01:50:23.079631] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:39.217 [2024-11-21 01:50:23.079640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:39.217 [2024-11-21 01:50:23.079661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:39.217 [2024-11-21 01:50:23.079668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:39.217 [2024-11-21 01:50:23.079676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:39.217 [2024-11-21 01:50:23.079684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:39.217 [2024-11-21 01:50:23.079691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:39.217 [2024-11-21 01:50:23.079699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:39.217 [2024-11-21 01:50:23.079706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:39.217 [2024-11-21 01:50:23.079714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:39.217 [2024-11-21 01:50:23.079721] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:39.217 [2024-11-21 01:50:23.079760] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:39.217 [2024-11-21 01:50:23.079772] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:39.217 [2024-11-21 01:50:23.079788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:39.217 [2024-11-21 01:50:23.079796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:39.217 [2024-11-21 01:50:23.079803] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:39.217 [2024-11-21 01:50:23.079812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.079820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:39.217 [2024-11-21 01:50:23.079831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:24:39.217 [2024-11-21 01:50:23.079839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.111804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.111855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.217 [2024-11-21 01:50:23.111867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.920 ms 00:24:39.217 [2024-11-21 01:50:23.111876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.111974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.111984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:39.217 [2024-11-21 01:50:23.111993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:39.217 [2024-11-21 01:50:23.112001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.159821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.159893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.217 [2024-11-21 01:50:23.159908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.760 ms 
00:24:39.217 [2024-11-21 01:50:23.159916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.159968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.159978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.217 [2024-11-21 01:50:23.159988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:39.217 [2024-11-21 01:50:23.160000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.160638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.160672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.217 [2024-11-21 01:50:23.160684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:24:39.217 [2024-11-21 01:50:23.160692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.217 [2024-11-21 01:50:23.160850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.217 [2024-11-21 01:50:23.160861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.217 [2024-11-21 01:50:23.160870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:24:39.217 [2024-11-21 01:50:23.160883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.478 [2024-11-21 01:50:23.176551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.176599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.479 [2024-11-21 01:50:23.176632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.647 ms 00:24:39.479 [2024-11-21 01:50:23.176641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.191167] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:39.479 [2024-11-21 01:50:23.191217] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:39.479 [2024-11-21 01:50:23.191231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.191240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:39.479 [2024-11-21 01:50:23.191250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.481 ms 00:24:39.479 [2024-11-21 01:50:23.191257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.217284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.217345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:39.479 [2024-11-21 01:50:23.217358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.971 ms 00:24:39.479 [2024-11-21 01:50:23.217367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.230246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.230302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:39.479 [2024-11-21 01:50:23.230314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.814 ms 00:24:39.479 [2024-11-21 01:50:23.230322] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.242938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.242986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:39.479 [2024-11-21 01:50:23.242998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.568 ms 00:24:39.479 [2024-11-21 01:50:23.243005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.243665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.243693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:39.479 [2024-11-21 01:50:23.243703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:24:39.479 [2024-11-21 01:50:23.243715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.308315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.308378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:39.479 [2024-11-21 01:50:23.308402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.579 ms 00:24:39.479 [2024-11-21 01:50:23.308411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.319545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:39.479 [2024-11-21 01:50:23.322568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.322625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:39.479 [2024-11-21 01:50:23.322639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.098 ms 00:24:39.479 [2024-11-21 01:50:23.322649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.322735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.322749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:39.479 [2024-11-21 01:50:23.322760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:39.479 [2024-11-21 01:50:23.322771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.324466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.324517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:39.479 [2024-11-21 01:50:23.324529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.656 ms 00:24:39.479 [2024-11-21 01:50:23.324537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.324567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.324576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:39.479 [2024-11-21 01:50:23.324586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:39.479 [2024-11-21 01:50:23.324594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.324655] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:39.479 [2024-11-21 01:50:23.324669] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.324678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:39.479 [2024-11-21 01:50:23.324687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:39.479 [2024-11-21 01:50:23.324696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.349963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.350011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:39.479 [2024-11-21 01:50:23.350024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.249 ms 00:24:39.479 [2024-11-21 01:50:23.350038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.350127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.479 [2024-11-21 01:50:23.350137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:39.479 [2024-11-21 01:50:23.350147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:24:39.479 [2024-11-21 01:50:23.350155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.479 [2024-11-21 01:50:23.351422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.133 ms, result 0 00:24:40.864  [2024-11-21T01:50:25.763Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-21T01:50:26.707Z] Copying: 24/1024 [MB] (10 MBps) [2024-11-21T01:50:27.704Z] Copying: 38/1024 [MB] (13 MBps) [2024-11-21T01:50:28.646Z] Copying: 56/1024 [MB] (18 MBps) [2024-11-21T01:50:29.587Z] Copying: 66/1024 [MB] (10 MBps) [2024-11-21T01:50:30.975Z] Copying: 76/1024 [MB] (10 MBps) [2024-11-21T01:50:31.547Z] Copying: 87/1024 [MB] (10 MBps) [2024-11-21T01:50:32.931Z] Copying: 97/1024 [MB] (10 MBps) [2024-11-21T01:50:33.876Z] Copying: 108/1024 [MB] (10 MBps) [2024-11-21T01:50:34.818Z] Copying: 119/1024 [MB] (10 MBps) [2024-11-21T01:50:35.760Z] Copying: 129/1024 [MB] (10 MBps) [2024-11-21T01:50:36.702Z] Copying: 140/1024 [MB] (10 MBps) [2024-11-21T01:50:37.645Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-21T01:50:38.589Z] Copying: 162/1024 [MB] (10 MBps) [2024-11-21T01:50:39.974Z] Copying: 173/1024 [MB] (10 MBps) [2024-11-21T01:50:40.555Z] Copying: 183/1024 [MB] (10 MBps) [2024-11-21T01:50:41.939Z] Copying: 193/1024 [MB] (10 MBps) [2024-11-21T01:50:42.882Z] Copying: 204/1024 [MB] (10 MBps) [2024-11-21T01:50:43.824Z] Copying: 214/1024 [MB] (10 MBps) [2024-11-21T01:50:44.767Z] Copying: 224/1024 [MB] (10 MBps) [2024-11-21T01:50:45.704Z] Copying: 235/1024 [MB] (10 MBps) [2024-11-21T01:50:46.647Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-21T01:50:47.589Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-21T01:50:48.975Z] Copying: 268/1024 [MB] (10 MBps) [2024-11-21T01:50:49.546Z] Copying: 279/1024 [MB] (11 MBps) [2024-11-21T01:50:50.929Z] Copying: 290/1024 [MB] (11 MBps) [2024-11-21T01:50:51.873Z] Copying: 301/1024 [MB] (10 MBps) [2024-11-21T01:50:52.814Z] Copying: 312/1024 [MB] (11 MBps) [2024-11-21T01:50:53.756Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-21T01:50:54.697Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-21T01:50:55.641Z] Copying: 345/1024 [MB] (10 MBps) [2024-11-21T01:50:56.584Z] Copying: 358/1024 [MB] (12 MBps) [2024-11-21T01:50:57.970Z] Copying: 370/1024 [MB] (12 MBps) [2024-11-21T01:50:58.912Z] Copying: 386/1024 [MB] (15 MBps) [2024-11-21T01:50:59.563Z] Copying: 399/1024 [MB] 
(12 MBps) [2024-11-21T01:51:00.955Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-21T01:51:01.897Z] Copying: 436/1024 [MB] (25 MBps) [2024-11-21T01:51:02.843Z] Copying: 462/1024 [MB] (25 MBps) [2024-11-21T01:51:03.787Z] Copying: 480/1024 [MB] (17 MBps) [2024-11-21T01:51:04.732Z] Copying: 497/1024 [MB] (17 MBps) [2024-11-21T01:51:05.677Z] Copying: 515/1024 [MB] (18 MBps) [2024-11-21T01:51:06.618Z] Copying: 537/1024 [MB] (22 MBps) [2024-11-21T01:51:07.561Z] Copying: 558/1024 [MB] (20 MBps) [2024-11-21T01:51:08.944Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-21T01:51:09.884Z] Copying: 592/1024 [MB] (15 MBps) [2024-11-21T01:51:10.825Z] Copying: 606/1024 [MB] (14 MBps) [2024-11-21T01:51:11.765Z] Copying: 630/1024 [MB] (23 MBps) [2024-11-21T01:51:12.703Z] Copying: 652/1024 [MB] (22 MBps) [2024-11-21T01:51:13.644Z] Copying: 670/1024 [MB] (18 MBps) [2024-11-21T01:51:14.584Z] Copying: 691/1024 [MB] (20 MBps) [2024-11-21T01:51:15.970Z] Copying: 707/1024 [MB] (15 MBps) [2024-11-21T01:51:16.913Z] Copying: 719/1024 [MB] (12 MBps) [2024-11-21T01:51:17.855Z] Copying: 730/1024 [MB] (11 MBps) [2024-11-21T01:51:18.799Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-21T01:51:19.738Z] Copying: 752/1024 [MB] (11 MBps) [2024-11-21T01:51:20.681Z] Copying: 763/1024 [MB] (10 MBps) [2024-11-21T01:51:21.621Z] Copying: 774/1024 [MB] (10 MBps) [2024-11-21T01:51:22.563Z] Copying: 784/1024 [MB] (10 MBps) [2024-11-21T01:51:23.949Z] Copying: 795/1024 [MB] (10 MBps) [2024-11-21T01:51:24.892Z] Copying: 821/1024 [MB] (26 MBps) [2024-11-21T01:51:25.835Z] Copying: 835/1024 [MB] (14 MBps) [2024-11-21T01:51:26.778Z] Copying: 850/1024 [MB] (15 MBps) [2024-11-21T01:51:27.720Z] Copying: 864/1024 [MB] (13 MBps) [2024-11-21T01:51:28.663Z] Copying: 881/1024 [MB] (16 MBps) [2024-11-21T01:51:29.606Z] Copying: 895/1024 [MB] (13 MBps) [2024-11-21T01:51:30.571Z] Copying: 909/1024 [MB] (14 MBps) [2024-11-21T01:51:31.552Z] Copying: 928/1024 [MB] (18 MBps) [2024-11-21T01:51:32.941Z] Copying: 946/1024 [MB] (17 MBps) [2024-11-21T01:51:33.883Z] Copying: 962/1024 [MB] (15 MBps) [2024-11-21T01:51:34.827Z] Copying: 973/1024 [MB] (11 MBps) [2024-11-21T01:51:35.772Z] Copying: 988/1024 [MB] (14 MBps) [2024-11-21T01:51:36.715Z] Copying: 1010/1024 [MB] (22 MBps) [2024-11-21T01:51:36.715Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-21T01:51:36.978Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-21 01:51:36.946118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.021 [2024-11-21 01:51:36.946188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:53.021 [2024-11-21 01:51:36.946205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.021 [2024-11-21 01:51:36.946214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.021 [2024-11-21 01:51:36.946248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:53.021 [2024-11-21 01:51:36.949419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.021 [2024-11-21 01:51:36.949463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:53.021 [2024-11-21 01:51:36.949475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:25:53.021 [2024-11-21 01:51:36.949484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.021 [2024-11-21 01:51:36.949747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.021 [2024-11-21 01:51:36.949760] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:53.021 [2024-11-21 01:51:36.949770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:25:53.021 [2024-11-21 01:51:36.949778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.021 [2024-11-21 01:51:36.955788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.021 [2024-11-21 01:51:36.955834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:53.021 [2024-11-21 01:51:36.955846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.986 ms 00:25:53.021 [2024-11-21 01:51:36.955855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.021 [2024-11-21 01:51:36.963043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.021 [2024-11-21 01:51:36.963089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:53.021 [2024-11-21 01:51:36.963100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.145 ms 00:25:53.021 [2024-11-21 01:51:36.963110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.283 [2024-11-21 01:51:36.991588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.283 [2024-11-21 01:51:36.991649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:53.283 [2024-11-21 01:51:36.991662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.404 ms 00:25:53.283 [2024-11-21 01:51:36.991670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.283 [2024-11-21 01:51:37.008437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.283 [2024-11-21 01:51:37.008493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:53.283 [2024-11-21 01:51:37.008506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.719 ms 00:25:53.283 [2024-11-21 01:51:37.008515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.367724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.546 [2024-11-21 01:51:37.367776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:53.546 [2024-11-21 01:51:37.367788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 359.153 ms 00:25:53.546 [2024-11-21 01:51:37.367798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.392974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.546 [2024-11-21 01:51:37.393024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:53.546 [2024-11-21 01:51:37.393037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.160 ms 00:25:53.546 [2024-11-21 01:51:37.393046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.418333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.546 [2024-11-21 01:51:37.418400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:53.546 [2024-11-21 01:51:37.418424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.241 ms 00:25:53.546 [2024-11-21 01:51:37.418432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.442592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:53.546 [2024-11-21 01:51:37.442651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:53.546 [2024-11-21 01:51:37.442663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.114 ms 00:25:53.546 [2024-11-21 01:51:37.442671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.466708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.546 [2024-11-21 01:51:37.466758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:53.546 [2024-11-21 01:51:37.466770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.966 ms 00:25:53.546 [2024-11-21 01:51:37.466777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.546 [2024-11-21 01:51:37.466822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:53.546 [2024-11-21 01:51:37.466838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:53.546 [2024-11-21 01:51:37.466849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:53.546 [2024-11-21 01:51:37.466956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.466964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.466971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.466979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.466987] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.466994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 
[2024-11-21 01:51:37.467178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:25:53.547 [2024-11-21 01:51:37.467382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 
0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:53.547 [2024-11-21 01:51:37.467663] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:53.547 [2024-11-21 01:51:37.467672] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f555680a-895d-4d20-a79f-de599ad6b77b 00:25:53.547 [2024-11-21 01:51:37.467681] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:53.547 [2024-11-21 01:51:37.467689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 27840 00:25:53.547 [2024-11-21 01:51:37.467698] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26880 00:25:53.547 [2024-11-21 01:51:37.467707] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0357 00:25:53.548 [2024-11-21 01:51:37.467715] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:53.548 [2024-11-21 01:51:37.467728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:53.548 [2024-11-21 01:51:37.467736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:53.548 [2024-11-21 01:51:37.467750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:53.548 [2024-11-21 01:51:37.467757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:53.548 [2024-11-21 01:51:37.467765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.548 [2024-11-21 01:51:37.467773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:53.548 [2024-11-21 01:51:37.467784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:25:53.548 [2024-11-21 01:51:37.467793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.548 [2024-11-21 01:51:37.481371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.548 [2024-11-21 01:51:37.481423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:53.548 [2024-11-21 01:51:37.481435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.560 ms 00:25:53.548 [2024-11-21 01:51:37.481449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.548 [2024-11-21 01:51:37.481877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.548 [2024-11-21 01:51:37.481902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:53.548 [2024-11-21 01:51:37.481912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
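The statistics block above reports 27840 total writes against 26880 user writes; the WAF figure is simply their ratio, reproducible with a one-liner (illustrative only, not part of the test suite):

    awk 'BEGIN { printf "WAF = %.4f\n", 27840 / 26880 }'   # -> WAF = 1.0357, matching the dump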
0.394 ms 00:25:53.548 [2024-11-21 01:51:37.481920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.518152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.518202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:53.810 [2024-11-21 01:51:37.518219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.518227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.518290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.518299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:53.810 [2024-11-21 01:51:37.518308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.518316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.518375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.518387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:53.810 [2024-11-21 01:51:37.518396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.518408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.518424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.518432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:53.810 [2024-11-21 01:51:37.518440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.518449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.601417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.601479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:53.810 [2024-11-21 01:51:37.601498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.601507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.669646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.669706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:53.810 [2024-11-21 01:51:37.669719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.669728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.669813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.669824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:53.810 [2024-11-21 01:51:37.669833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.669842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.669886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.669897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:53.810 [2024-11-21 
01:51:37.669906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.669914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.670009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.670019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:53.810 [2024-11-21 01:51:37.670028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.670036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.670070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.670080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:53.810 [2024-11-21 01:51:37.670088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.670097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.670139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.670149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:53.810 [2024-11-21 01:51:37.670158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.670166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.670213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:53.810 [2024-11-21 01:51:37.670224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:53.810 [2024-11-21 01:51:37.670233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:53.810 [2024-11-21 01:51:37.670241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.810 [2024-11-21 01:51:37.670375] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 724.218 ms, result 0 00:25:54.754 00:25:54.755 00:25:54.755 01:51:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.667 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:56.667 01:51:40 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:56.667 01:51:40 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:56.667 01:51:40 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:56.926 Process with pid 77200 is not found 00:25:56.926 Remove shared memory files 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77200 00:25:56.926 01:51:40 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77200 ']' 00:25:56.926 01:51:40 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77200 00:25:56.926 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77200) - No such process 00:25:56.926 01:51:40 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77200 is not found' 00:25:56.926 01:51:40 ftl.ftl_restore 
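The "testfile: OK" line above is the point of the restore test: a checksum recorded before the dirty shutdown must still verify after the FTL device is brought back up. A rough sketch of that pattern, leaving out the FTL setup itself and using a hypothetical payload size:

    dd if=/dev/urandom of=testfile bs=4096 count=65536   # sample payload (size chosen for illustration)
    md5sum testfile > testfile.md5                       # record the reference checksum up front
    # ... write testfile through the FTL bdev, shut down uncleanly, restore the device ...
    md5sum -c testfile.md5                               # prints "testfile: OK" when the restored data matches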
-- ftl/restore.sh@33 -- # remove_shm 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:56.926 01:51:40 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:56.926 00:25:56.926 real 4m49.588s 00:25:56.926 user 4m36.816s 00:25:56.926 sys 0m12.434s 00:25:56.926 01:51:40 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.926 ************************************ 00:25:56.926 END TEST ftl_restore 00:25:56.926 ************************************ 00:25:56.926 01:51:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:56.926 01:51:40 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.926 01:51:40 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:56.926 01:51:40 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.926 01:51:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:56.926 ************************************ 00:25:56.926 START TEST ftl_dirty_shutdown 00:25:56.926 ************************************ 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:56.926 * Looking for test storage... 00:25:56.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:56.926 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.927 --rc genhtml_branch_coverage=1 00:25:56.927 --rc genhtml_function_coverage=1 00:25:56.927 --rc genhtml_legend=1 00:25:56.927 --rc geninfo_all_blocks=1 00:25:56.927 --rc geninfo_unexecuted_blocks=1 00:25:56.927 00:25:56.927 ' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.927 --rc genhtml_branch_coverage=1 00:25:56.927 --rc genhtml_function_coverage=1 00:25:56.927 --rc genhtml_legend=1 00:25:56.927 --rc geninfo_all_blocks=1 00:25:56.927 --rc geninfo_unexecuted_blocks=1 00:25:56.927 00:25:56.927 ' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.927 --rc genhtml_branch_coverage=1 00:25:56.927 --rc genhtml_function_coverage=1 00:25:56.927 --rc genhtml_legend=1 00:25:56.927 --rc geninfo_all_blocks=1 00:25:56.927 --rc geninfo_unexecuted_blocks=1 00:25:56.927 00:25:56.927 ' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:56.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:56.927 --rc genhtml_branch_coverage=1 00:25:56.927 --rc genhtml_function_coverage=1 00:25:56.927 --rc genhtml_legend=1 00:25:56.927 --rc geninfo_all_blocks=1 00:25:56.927 --rc geninfo_unexecuted_blocks=1 00:25:56.927 00:25:56.927 ' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:56.927 01:51:40 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.191 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:57.191 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:57.191 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:57.191 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:57.191 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:57.192 01:51:40 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=80255 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 80255 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80255 ']' 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:57.192 01:51:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:57.192 [2024-11-21 01:51:40.984800] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
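The xtrace above (dirty_shutdown.sh@44 through @47) starts the target with spdk_tgt -m 0x1 and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock socket; the harness's own waitforlisten does more (pid tracking, timeouts):

    ./build/bin/spdk_tgt -m 0x1 &                              # core mask taken from the xtrace above
    svcpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                              # poll until the RPC server is listening
    done
    echo "spdk_tgt ($svcpid) is ready"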
00:25:57.192 [2024-11-21 01:51:40.984963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80255 ] 00:25:57.451 [2024-11-21 01:51:41.150854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.451 [2024-11-21 01:51:41.236132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:58.019 01:51:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:58.279 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:58.539 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:58.539 { 00:25:58.539 "name": "nvme0n1", 00:25:58.539 "aliases": [ 00:25:58.539 "d9320fd3-e4de-40c9-9f80-6b5c347043f7" 00:25:58.539 ], 00:25:58.539 "product_name": "NVMe disk", 00:25:58.539 "block_size": 4096, 00:25:58.539 "num_blocks": 1310720, 00:25:58.539 "uuid": "d9320fd3-e4de-40c9-9f80-6b5c347043f7", 00:25:58.539 "numa_id": -1, 00:25:58.539 "assigned_rate_limits": { 00:25:58.539 "rw_ios_per_sec": 0, 00:25:58.539 "rw_mbytes_per_sec": 0, 00:25:58.539 "r_mbytes_per_sec": 0, 00:25:58.539 "w_mbytes_per_sec": 0 00:25:58.539 }, 00:25:58.539 "claimed": true, 00:25:58.539 "claim_type": "read_many_write_one", 00:25:58.539 "zoned": false, 00:25:58.539 "supported_io_types": { 00:25:58.539 "read": true, 00:25:58.539 "write": true, 00:25:58.539 "unmap": true, 00:25:58.539 "flush": true, 00:25:58.539 "reset": true, 00:25:58.539 "nvme_admin": true, 00:25:58.539 "nvme_io": true, 00:25:58.539 "nvme_io_md": false, 00:25:58.539 "write_zeroes": true, 00:25:58.539 "zcopy": false, 00:25:58.539 "get_zone_info": false, 00:25:58.539 "zone_management": false, 00:25:58.539 "zone_append": false, 00:25:58.539 "compare": true, 00:25:58.539 "compare_and_write": false, 00:25:58.539 "abort": true, 00:25:58.539 "seek_hole": false, 00:25:58.539 "seek_data": false, 00:25:58.539 
"copy": true, 00:25:58.539 "nvme_iov_md": false 00:25:58.539 }, 00:25:58.539 "driver_specific": { 00:25:58.539 "nvme": [ 00:25:58.539 { 00:25:58.539 "pci_address": "0000:00:11.0", 00:25:58.539 "trid": { 00:25:58.539 "trtype": "PCIe", 00:25:58.539 "traddr": "0000:00:11.0" 00:25:58.539 }, 00:25:58.539 "ctrlr_data": { 00:25:58.539 "cntlid": 0, 00:25:58.539 "vendor_id": "0x1b36", 00:25:58.539 "model_number": "QEMU NVMe Ctrl", 00:25:58.539 "serial_number": "12341", 00:25:58.539 "firmware_revision": "8.0.0", 00:25:58.539 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:58.539 "oacs": { 00:25:58.539 "security": 0, 00:25:58.539 "format": 1, 00:25:58.539 "firmware": 0, 00:25:58.539 "ns_manage": 1 00:25:58.539 }, 00:25:58.540 "multi_ctrlr": false, 00:25:58.540 "ana_reporting": false 00:25:58.540 }, 00:25:58.540 "vs": { 00:25:58.540 "nvme_version": "1.4" 00:25:58.540 }, 00:25:58.540 "ns_data": { 00:25:58.540 "id": 1, 00:25:58.540 "can_share": false 00:25:58.540 } 00:25:58.540 } 00:25:58.540 ], 00:25:58.540 "mp_policy": "active_passive" 00:25:58.540 } 00:25:58.540 } 00:25:58.540 ]' 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:58.540 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:58.801 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=342d81aa-c9f5-410e-8bbc-840dc7fcdd1c 00:25:58.801 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:58.801 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 342d81aa-c9f5-410e-8bbc-840dc7fcdd1c 00:25:59.062 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:59.062 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5dc85a8b-0ef0-4ec0-9e96-3da7455f2152 00:25:59.062 01:51:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5dc85a8b-0ef0-4ec0-9e96-3da7455f2152 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.322 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:59.584 { 00:25:59.584 "name": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:25:59.584 "aliases": [ 00:25:59.584 "lvs/nvme0n1p0" 00:25:59.584 ], 00:25:59.584 "product_name": "Logical Volume", 00:25:59.584 "block_size": 4096, 00:25:59.584 "num_blocks": 26476544, 00:25:59.584 "uuid": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:25:59.584 "assigned_rate_limits": { 00:25:59.584 "rw_ios_per_sec": 0, 00:25:59.584 "rw_mbytes_per_sec": 0, 00:25:59.584 "r_mbytes_per_sec": 0, 00:25:59.584 "w_mbytes_per_sec": 0 00:25:59.584 }, 00:25:59.584 "claimed": false, 00:25:59.584 "zoned": false, 00:25:59.584 "supported_io_types": { 00:25:59.584 "read": true, 00:25:59.584 "write": true, 00:25:59.584 "unmap": true, 00:25:59.584 "flush": false, 00:25:59.584 "reset": true, 00:25:59.584 "nvme_admin": false, 00:25:59.584 "nvme_io": false, 00:25:59.584 "nvme_io_md": false, 00:25:59.584 "write_zeroes": true, 00:25:59.584 "zcopy": false, 00:25:59.584 "get_zone_info": false, 00:25:59.584 "zone_management": false, 00:25:59.584 "zone_append": false, 00:25:59.584 "compare": false, 00:25:59.584 "compare_and_write": false, 00:25:59.584 "abort": false, 00:25:59.584 "seek_hole": true, 00:25:59.584 "seek_data": true, 00:25:59.584 "copy": false, 00:25:59.584 "nvme_iov_md": false 00:25:59.584 }, 00:25:59.584 "driver_specific": { 00:25:59.584 "lvol": { 00:25:59.584 "lvol_store_uuid": "5dc85a8b-0ef0-4ec0-9e96-3da7455f2152", 00:25:59.584 "base_bdev": "nvme0n1", 00:25:59.584 "thin_provision": true, 00:25:59.584 "num_allocated_clusters": 0, 00:25:59.584 "snapshot": false, 00:25:59.584 "clone": false, 00:25:59.584 "esnap_clone": false 00:25:59.584 } 00:25:59.584 } 00:25:59.584 } 00:25:59.584 ]' 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:59.584 01:51:43 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fdf39c88-1b86-48fb-83c9-694605c33816 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:59.846 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf39c88-1b86-48fb-83c9-694605c33816 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.108 { 00:26:00.108 "name": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:26:00.108 "aliases": [ 00:26:00.108 "lvs/nvme0n1p0" 00:26:00.108 ], 00:26:00.108 "product_name": "Logical Volume", 00:26:00.108 "block_size": 4096, 00:26:00.108 "num_blocks": 26476544, 00:26:00.108 "uuid": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:26:00.108 "assigned_rate_limits": { 00:26:00.108 "rw_ios_per_sec": 0, 00:26:00.108 "rw_mbytes_per_sec": 0, 00:26:00.108 "r_mbytes_per_sec": 0, 00:26:00.108 "w_mbytes_per_sec": 0 00:26:00.108 }, 00:26:00.108 "claimed": false, 00:26:00.108 "zoned": false, 00:26:00.108 "supported_io_types": { 00:26:00.108 "read": true, 00:26:00.108 "write": true, 00:26:00.108 "unmap": true, 00:26:00.108 "flush": false, 00:26:00.108 "reset": true, 00:26:00.108 "nvme_admin": false, 00:26:00.108 "nvme_io": false, 00:26:00.108 "nvme_io_md": false, 00:26:00.108 "write_zeroes": true, 00:26:00.108 "zcopy": false, 00:26:00.108 "get_zone_info": false, 00:26:00.108 "zone_management": false, 00:26:00.108 "zone_append": false, 00:26:00.108 "compare": false, 00:26:00.108 "compare_and_write": false, 00:26:00.108 "abort": false, 00:26:00.108 "seek_hole": true, 00:26:00.108 "seek_data": true, 00:26:00.108 "copy": false, 00:26:00.108 "nvme_iov_md": false 00:26:00.108 }, 00:26:00.108 "driver_specific": { 00:26:00.108 "lvol": { 00:26:00.108 "lvol_store_uuid": "5dc85a8b-0ef0-4ec0-9e96-3da7455f2152", 00:26:00.108 "base_bdev": "nvme0n1", 00:26:00.108 "thin_provision": true, 00:26:00.108 "num_allocated_clusters": 0, 00:26:00.108 "snapshot": false, 00:26:00.108 "clone": false, 00:26:00.108 "esnap_clone": false 00:26:00.108 } 00:26:00.108 } 00:26:00.108 } 00:26:00.108 ]' 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:00.108 01:51:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size fdf39c88-1b86-48fb-83c9-694605c33816 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fdf39c88-1b86-48fb-83c9-694605c33816 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:00.369 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fdf39c88-1b86-48fb-83c9-694605c33816 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:00.630 { 00:26:00.630 "name": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:26:00.630 "aliases": [ 00:26:00.630 "lvs/nvme0n1p0" 00:26:00.630 ], 00:26:00.630 "product_name": "Logical Volume", 00:26:00.630 "block_size": 4096, 00:26:00.630 "num_blocks": 26476544, 00:26:00.630 "uuid": "fdf39c88-1b86-48fb-83c9-694605c33816", 00:26:00.630 "assigned_rate_limits": { 00:26:00.630 "rw_ios_per_sec": 0, 00:26:00.630 "rw_mbytes_per_sec": 0, 00:26:00.630 "r_mbytes_per_sec": 0, 00:26:00.630 "w_mbytes_per_sec": 0 00:26:00.630 }, 00:26:00.630 "claimed": false, 00:26:00.630 "zoned": false, 00:26:00.630 "supported_io_types": { 00:26:00.630 "read": true, 00:26:00.630 "write": true, 00:26:00.630 "unmap": true, 00:26:00.630 "flush": false, 00:26:00.630 "reset": true, 00:26:00.630 "nvme_admin": false, 00:26:00.630 "nvme_io": false, 00:26:00.630 "nvme_io_md": false, 00:26:00.630 "write_zeroes": true, 00:26:00.630 "zcopy": false, 00:26:00.630 "get_zone_info": false, 00:26:00.630 "zone_management": false, 00:26:00.630 "zone_append": false, 00:26:00.630 "compare": false, 00:26:00.630 "compare_and_write": false, 00:26:00.630 "abort": false, 00:26:00.630 "seek_hole": true, 00:26:00.630 "seek_data": true, 00:26:00.630 "copy": false, 00:26:00.630 "nvme_iov_md": false 00:26:00.630 }, 00:26:00.630 "driver_specific": { 00:26:00.630 "lvol": { 00:26:00.630 "lvol_store_uuid": "5dc85a8b-0ef0-4ec0-9e96-3da7455f2152", 00:26:00.630 "base_bdev": "nvme0n1", 00:26:00.630 "thin_provision": true, 00:26:00.630 "num_allocated_clusters": 0, 00:26:00.630 "snapshot": false, 00:26:00.630 "clone": false, 00:26:00.630 "esnap_clone": false 00:26:00.630 } 00:26:00.630 } 00:26:00.630 } 00:26:00.630 ]' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fdf39c88-1b86-48fb-83c9-694605c33816 
--l2p_dram_limit 10' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:00.630 01:51:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fdf39c88-1b86-48fb-83c9-694605c33816 --l2p_dram_limit 10 -c nvc0n1p0 00:26:00.892 [2024-11-21 01:51:44.635944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.892 [2024-11-21 01:51:44.635978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:00.892 [2024-11-21 01:51:44.635991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:00.892 [2024-11-21 01:51:44.635997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.892 [2024-11-21 01:51:44.636040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.892 [2024-11-21 01:51:44.636048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:00.892 [2024-11-21 01:51:44.636056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:00.892 [2024-11-21 01:51:44.636062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.892 [2024-11-21 01:51:44.636081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:00.892 [2024-11-21 01:51:44.636643] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:00.892 [2024-11-21 01:51:44.636660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.892 [2024-11-21 01:51:44.636666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:00.892 [2024-11-21 01:51:44.636674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:26:00.892 [2024-11-21 01:51:44.636680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.892 [2024-11-21 01:51:44.636720] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5b817a20-1913-404f-8c83-c65c9c250ee0 00:26:00.892 [2024-11-21 01:51:44.637690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.637708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:00.893 [2024-11-21 01:51:44.637715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:00.893 [2024-11-21 01:51:44.637722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.642413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.642440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:00.893 [2024-11-21 01:51:44.642450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:26:00.893 [2024-11-21 01:51:44.642457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.642522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.642530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:00.893 [2024-11-21 01:51:44.642537] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:00.893 [2024-11-21 01:51:44.642546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.642577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.642585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:00.893 [2024-11-21 01:51:44.642591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:00.893 [2024-11-21 01:51:44.642600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.642628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:00.893 [2024-11-21 01:51:44.645417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.645440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:00.893 [2024-11-21 01:51:44.645448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.792 ms 00:26:00.893 [2024-11-21 01:51:44.645454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.645479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.645485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:00.893 [2024-11-21 01:51:44.645493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:00.893 [2024-11-21 01:51:44.645498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.645518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:00.893 [2024-11-21 01:51:44.645630] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:00.893 [2024-11-21 01:51:44.645646] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:00.893 [2024-11-21 01:51:44.645654] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:00.893 [2024-11-21 01:51:44.645663] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:00.893 [2024-11-21 01:51:44.645670] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:00.893 [2024-11-21 01:51:44.645678] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:00.893 [2024-11-21 01:51:44.645684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:00.893 [2024-11-21 01:51:44.645692] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:00.893 [2024-11-21 01:51:44.645697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:00.893 [2024-11-21 01:51:44.645704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.645710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:00.893 [2024-11-21 01:51:44.645717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:26:00.893 [2024-11-21 01:51:44.645729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.645794] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.893 [2024-11-21 01:51:44.645800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:00.893 [2024-11-21 01:51:44.645807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:26:00.893 [2024-11-21 01:51:44.645813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.893 [2024-11-21 01:51:44.645892] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:00.893 [2024-11-21 01:51:44.645899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:00.893 [2024-11-21 01:51:44.645906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.893 [2024-11-21 01:51:44.645912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.645919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:00.893 [2024-11-21 01:51:44.645924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.645931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:00.893 [2024-11-21 01:51:44.645936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:00.893 [2024-11-21 01:51:44.645942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:00.893 [2024-11-21 01:51:44.645947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.893 [2024-11-21 01:51:44.645954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:00.893 [2024-11-21 01:51:44.645959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:00.893 [2024-11-21 01:51:44.645965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:00.893 [2024-11-21 01:51:44.645970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:00.893 [2024-11-21 01:51:44.645978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:00.893 [2024-11-21 01:51:44.645983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.645992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:00.893 [2024-11-21 01:51:44.645997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:00.893 [2024-11-21 01:51:44.646016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:00.893 [2024-11-21 01:51:44.646032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:00.893 [2024-11-21 01:51:44.646049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646060] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:00.893 [2024-11-21 01:51:44.646065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:00.893 [2024-11-21 01:51:44.646084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.893 [2024-11-21 01:51:44.646094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:00.893 [2024-11-21 01:51:44.646099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:00.893 [2024-11-21 01:51:44.646106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:00.893 [2024-11-21 01:51:44.646111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:00.893 [2024-11-21 01:51:44.646118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:00.893 [2024-11-21 01:51:44.646122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:00.893 [2024-11-21 01:51:44.646133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:00.893 [2024-11-21 01:51:44.646140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:00.893 [2024-11-21 01:51:44.646152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:00.893 [2024-11-21 01:51:44.646157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:00.893 [2024-11-21 01:51:44.646171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:00.893 [2024-11-21 01:51:44.646179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:00.893 [2024-11-21 01:51:44.646184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:00.893 [2024-11-21 01:51:44.646191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:00.893 [2024-11-21 01:51:44.646196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:00.893 [2024-11-21 01:51:44.646203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:00.893 [2024-11-21 01:51:44.646210] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:00.893 [2024-11-21 01:51:44.646219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.893 [2024-11-21 01:51:44.646227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:00.893 [2024-11-21 01:51:44.646234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:00.893 [2024-11-21 01:51:44.646239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:00.893 [2024-11-21 01:51:44.646246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:00.893 [2024-11-21 01:51:44.646251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:00.893 [2024-11-21 01:51:44.646258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:00.894 [2024-11-21 01:51:44.646263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:00.894 [2024-11-21 01:51:44.646269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:00.894 [2024-11-21 01:51:44.646275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:00.894 [2024-11-21 01:51:44.646282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:00.894 [2024-11-21 01:51:44.646312] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:00.894 [2024-11-21 01:51:44.646320] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:00.894 [2024-11-21 01:51:44.646333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:00.894 [2024-11-21 01:51:44.646338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:00.894 [2024-11-21 01:51:44.646345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:00.894 [2024-11-21 01:51:44.646351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:00.894 [2024-11-21 01:51:44.646357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:00.894 [2024-11-21 01:51:44.646363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:26:00.894 [2024-11-21 01:51:44.646370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:00.894 [2024-11-21 01:51:44.646398] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:00.894 [2024-11-21 01:51:44.646408] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:05.099 [2024-11-21 01:51:48.726919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.726991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:05.099 [2024-11-21 01:51:48.727008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4080.504 ms 00:26:05.099 [2024-11-21 01:51:48.727019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.758760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.758813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.099 [2024-11-21 01:51:48.758828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:26:05.099 [2024-11-21 01:51:48.758839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.758979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.758992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.099 [2024-11-21 01:51:48.759002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:26:05.099 [2024-11-21 01:51:48.759014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.794725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.794773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.099 [2024-11-21 01:51:48.794786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.657 ms 00:26:05.099 [2024-11-21 01:51:48.794797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.794831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.794846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.099 [2024-11-21 01:51:48.794855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.099 [2024-11-21 01:51:48.794865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.795456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.795482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.099 [2024-11-21 01:51:48.795494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:26:05.099 [2024-11-21 01:51:48.795505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.795641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.795655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.099 [2024-11-21 01:51:48.795666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:26:05.099 [2024-11-21 01:51:48.795680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.812986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.813029] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.099 [2024-11-21 01:51:48.813040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:26:05.099 [2024-11-21 01:51:48.813050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.826499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:05.099 [2024-11-21 01:51:48.830266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.830301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.099 [2024-11-21 01:51:48.830314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.126 ms 00:26:05.099 [2024-11-21 01:51:48.830322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.939227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.939286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:05.099 [2024-11-21 01:51:48.939306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.867 ms 00:26:05.099 [2024-11-21 01:51:48.939315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.939508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.939523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.099 [2024-11-21 01:51:48.939538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:26:05.099 [2024-11-21 01:51:48.939547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.965507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.965550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:05.099 [2024-11-21 01:51:48.965567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.901 ms 00:26:05.099 [2024-11-21 01:51:48.965576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.990375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.990414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:05.099 [2024-11-21 01:51:48.990430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.760 ms 00:26:05.099 [2024-11-21 01:51:48.990439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.099 [2024-11-21 01:51:48.991039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.099 [2024-11-21 01:51:48.991051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.099 [2024-11-21 01:51:48.991063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:26:05.099 [2024-11-21 01:51:48.991071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.359 [2024-11-21 01:51:49.076637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.359 [2024-11-21 01:51:49.076686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:05.359 [2024-11-21 01:51:49.076706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.519 ms 00:26:05.359 [2024-11-21 01:51:49.076715] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.359 [2024-11-21 01:51:49.104077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.359 [2024-11-21 01:51:49.104122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:05.359 [2024-11-21 01:51:49.104137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.266 ms 00:26:05.359 [2024-11-21 01:51:49.104145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.360 [2024-11-21 01:51:49.129449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.360 [2024-11-21 01:51:49.129489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:05.360 [2024-11-21 01:51:49.129504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.252 ms 00:26:05.360 [2024-11-21 01:51:49.129511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.360 [2024-11-21 01:51:49.155817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.360 [2024-11-21 01:51:49.155860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.360 [2024-11-21 01:51:49.155874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.255 ms 00:26:05.360 [2024-11-21 01:51:49.155882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.360 [2024-11-21 01:51:49.155935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.360 [2024-11-21 01:51:49.155945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.360 [2024-11-21 01:51:49.155960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:05.360 [2024-11-21 01:51:49.155968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.360 [2024-11-21 01:51:49.156064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.360 [2024-11-21 01:51:49.156075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.360 [2024-11-21 01:51:49.156089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:05.360 [2024-11-21 01:51:49.156097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.360 [2024-11-21 01:51:49.157246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4520.794 ms, result 0 00:26:05.360 { 00:26:05.360 "name": "ftl0", 00:26:05.360 "uuid": "5b817a20-1913-404f-8c83-c65c9c250ee0" 00:26:05.360 } 00:26:05.360 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:05.360 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:05.621 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:05.621 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:05.621 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:05.881 /dev/nbd0 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:05.881 1+0 records in 00:26:05.881 1+0 records out 00:26:05.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353375 s, 11.6 MB/s 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:05.881 01:51:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:05.881 [2024-11-21 01:51:49.719236] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:26:05.881 [2024-11-21 01:51:49.719372] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80402 ] 00:26:06.142 [2024-11-21 01:51:49.883261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.142 [2024-11-21 01:51:50.029794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.527  [2024-11-21T01:51:52.419Z] Copying: 186/1024 [MB] (186 MBps) [2024-11-21T01:51:53.355Z] Copying: 419/1024 [MB] (233 MBps) [2024-11-21T01:51:54.729Z] Copying: 676/1024 [MB] (256 MBps) [2024-11-21T01:51:54.729Z] Copying: 927/1024 [MB] (250 MBps) [2024-11-21T01:51:55.663Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:26:11.706 00:26:11.706 01:51:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:13.616 01:51:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:13.616 [2024-11-21 01:51:57.455029] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
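At this point ftl0 has been created on the thin-provisioned base volume with nvc0n1p0 as its write-buffer cache and exported as /dev/nbd0, and the two spdk_dd invocations above move exactly --bs 4096 x --count 262144 = 1 GiB of data (hence the progress counters running to 1024/1024 [MB]): the first fills a reference file from /dev/urandom, the second replays it onto the FTL device with O_DIRECT. A rough coreutils sketch of the same data path is below, assuming the testfile path and /dev/nbd0 seen in the trace; the test itself drives both copies with spdk_dd rather than plain dd.

# build the 1 GiB reference file and record its checksum for later comparison
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4096 count=262144
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
# replay the reference file onto the FTL bdev through its NBD export,
# bypassing the page cache so every block actually reaches ftl0
dd if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct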
00:26:13.616 [2024-11-21 01:51:57.455116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80485 ] 00:26:13.875 [2024-11-21 01:51:57.604430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.875 [2024-11-21 01:51:57.689589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.249  [2024-11-21T01:52:00.140Z] Copying: 34/1024 [MB] (34 MBps) [2024-11-21T01:52:01.074Z] Copying: 59/1024 [MB] (25 MBps) [2024-11-21T01:52:02.008Z] Copying: 75/1024 [MB] (15 MBps) [2024-11-21T01:52:02.960Z] Copying: 97/1024 [MB] (21 MBps) [2024-11-21T01:52:03.982Z] Copying: 109/1024 [MB] (11 MBps) [2024-11-21T01:52:04.917Z] Copying: 119/1024 [MB] (10 MBps) [2024-11-21T01:52:06.291Z] Copying: 153/1024 [MB] (33 MBps) [2024-11-21T01:52:07.225Z] Copying: 172/1024 [MB] (18 MBps) [2024-11-21T01:52:08.159Z] Copying: 195/1024 [MB] (23 MBps) [2024-11-21T01:52:09.092Z] Copying: 226/1024 [MB] (31 MBps) [2024-11-21T01:52:10.025Z] Copying: 245/1024 [MB] (18 MBps) [2024-11-21T01:52:10.959Z] Copying: 265/1024 [MB] (20 MBps) [2024-11-21T01:52:11.893Z] Copying: 297/1024 [MB] (31 MBps) [2024-11-21T01:52:13.264Z] Copying: 317/1024 [MB] (19 MBps) [2024-11-21T01:52:14.197Z] Copying: 335/1024 [MB] (18 MBps) [2024-11-21T01:52:15.130Z] Copying: 351/1024 [MB] (15 MBps) [2024-11-21T01:52:16.062Z] Copying: 368/1024 [MB] (16 MBps) [2024-11-21T01:52:17.039Z] Copying: 386/1024 [MB] (18 MBps) [2024-11-21T01:52:17.972Z] Copying: 404/1024 [MB] (17 MBps) [2024-11-21T01:52:18.906Z] Copying: 420/1024 [MB] (16 MBps) [2024-11-21T01:52:20.284Z] Copying: 451/1024 [MB] (31 MBps) [2024-11-21T01:52:21.218Z] Copying: 469/1024 [MB] (17 MBps) [2024-11-21T01:52:22.151Z] Copying: 484/1024 [MB] (14 MBps) [2024-11-21T01:52:23.084Z] Copying: 500/1024 [MB] (16 MBps) [2024-11-21T01:52:24.018Z] Copying: 529/1024 [MB] (28 MBps) [2024-11-21T01:52:24.951Z] Copying: 563/1024 [MB] (33 MBps) [2024-11-21T01:52:25.883Z] Copying: 591/1024 [MB] (28 MBps) [2024-11-21T01:52:27.256Z] Copying: 605/1024 [MB] (14 MBps) [2024-11-21T01:52:28.189Z] Copying: 619/1024 [MB] (14 MBps) [2024-11-21T01:52:29.123Z] Copying: 654/1024 [MB] (34 MBps) [2024-11-21T01:52:30.054Z] Copying: 675/1024 [MB] (21 MBps) [2024-11-21T01:52:30.987Z] Copying: 688/1024 [MB] (13 MBps) [2024-11-21T01:52:31.921Z] Copying: 717/1024 [MB] (28 MBps) [2024-11-21T01:52:33.295Z] Copying: 752/1024 [MB] (34 MBps) [2024-11-21T01:52:33.964Z] Copying: 774/1024 [MB] (22 MBps) [2024-11-21T01:52:34.895Z] Copying: 793/1024 [MB] (18 MBps) [2024-11-21T01:52:36.269Z] Copying: 816/1024 [MB] (22 MBps) [2024-11-21T01:52:37.204Z] Copying: 850/1024 [MB] (34 MBps) [2024-11-21T01:52:38.139Z] Copying: 883/1024 [MB] (33 MBps) [2024-11-21T01:52:39.073Z] Copying: 901/1024 [MB] (18 MBps) [2024-11-21T01:52:40.008Z] Copying: 927/1024 [MB] (25 MBps) [2024-11-21T01:52:40.944Z] Copying: 943/1024 [MB] (16 MBps) [2024-11-21T01:52:42.319Z] Copying: 958/1024 [MB] (14 MBps) [2024-11-21T01:52:42.886Z] Copying: 977/1024 [MB] (19 MBps) [2024-11-21T01:52:44.262Z] Copying: 1000/1024 [MB] (22 MBps) [2024-11-21T01:52:44.262Z] Copying: 1023/1024 [MB] (23 MBps) [2024-11-21T01:52:44.831Z] Copying: 1024/1024 [MB] (average 22 MBps) 00:27:00.874 00:27:00.874 01:52:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:00.874 01:52:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:00.874 01:52:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:01.137 [2024-11-21 01:52:44.942577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.942627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:01.137 [2024-11-21 01:52:44.942638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:01.137 [2024-11-21 01:52:44.942646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.942664] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:01.137 [2024-11-21 01:52:44.944720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.944744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:01.137 [2024-11-21 01:52:44.944754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:27:01.137 [2024-11-21 01:52:44.944760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.949821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.949850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:01.137 [2024-11-21 01:52:44.949860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:27:01.137 [2024-11-21 01:52:44.949866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.963542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.963569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:01.137 [2024-11-21 01:52:44.963579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.660 ms 00:27:01.137 [2024-11-21 01:52:44.963585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.968419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.968440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:01.137 [2024-11-21 01:52:44.968450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:27:01.137 [2024-11-21 01:52:44.968457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.986770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.986796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:01.137 [2024-11-21 01:52:44.986806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.260 ms 00:27:01.137 [2024-11-21 01:52:44.986812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:44.999508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.999533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:01.137 [2024-11-21 01:52:44.999545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.661 ms 00:27:01.137 [2024-11-21 01:52:44.999553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 
01:52:44.999668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:44.999676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:01.137 [2024-11-21 01:52:44.999685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:01.137 [2024-11-21 01:52:44.999691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:45.017480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:45.017503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:01.137 [2024-11-21 01:52:45.017512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.773 ms 00:27:01.137 [2024-11-21 01:52:45.017518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:45.034994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:45.035016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:01.137 [2024-11-21 01:52:45.035025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.439 ms 00:27:01.137 [2024-11-21 01:52:45.035031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:45.052050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:45.052072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:01.137 [2024-11-21 01:52:45.052081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.987 ms 00:27:01.137 [2024-11-21 01:52:45.052086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:45.069159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.137 [2024-11-21 01:52:45.069182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:01.137 [2024-11-21 01:52:45.069191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.014 ms 00:27:01.137 [2024-11-21 01:52:45.069196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.137 [2024-11-21 01:52:45.069225] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:01.137 [2024-11-21 01:52:45.069235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:01.137 [2024-11-21 01:52:45.069290] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 
01:52:45.069468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 
00:27:01.138 [2024-11-21 01:52:45.069650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 
wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:01.138 [2024-11-21 01:52:45.069922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:01.139 [2024-11-21 01:52:45.069930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:01.139 [2024-11-21 01:52:45.069942] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:01.139 [2024-11-21 01:52:45.069949] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b817a20-1913-404f-8c83-c65c9c250ee0 00:27:01.139 [2024-11-21 01:52:45.069956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:01.139 [2024-11-21 01:52:45.069964] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:01.139 [2024-11-21 01:52:45.069969] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:01.139 [2024-11-21 01:52:45.069977] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:01.139 [2024-11-21 01:52:45.069982] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:01.139 [2024-11-21 01:52:45.069989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:01.139 [2024-11-21 01:52:45.069994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:01.139 [2024-11-21 01:52:45.070001] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:01.139 [2024-11-21 01:52:45.070006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:01.139 [2024-11-21 01:52:45.070012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.139 [2024-11-21 01:52:45.070018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:01.139 [2024-11-21 01:52:45.070026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:27:01.139 [2024-11-21 01:52:45.070031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.139 [2024-11-21 01:52:45.079381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.139 [2024-11-21 01:52:45.079403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:01.139 [2024-11-21 01:52:45.079414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.323 ms 00:27:01.139 [2024-11-21 01:52:45.079420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.139 [2024-11-21 01:52:45.079704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:01.139 [2024-11-21 01:52:45.079711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:01.139 [2024-11-21 01:52:45.079719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:27:01.139 [2024-11-21 01:52:45.079725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.112471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.112497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:01.400 [2024-11-21 01:52:45.112507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.112513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.112557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.112563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:01.400 [2024-11-21 01:52:45.112570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.112576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.112660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.112671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:01.400 [2024-11-21 01:52:45.112680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.112685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.112701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.112706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:01.400 [2024-11-21 01:52:45.112713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.112719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.171367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.171397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:27:01.400 [2024-11-21 01:52:45.171407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.171412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:01.400 [2024-11-21 01:52:45.219111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:01.400 [2024-11-21 01:52:45.219186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:01.400 [2024-11-21 01:52:45.219257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:01.400 [2024-11-21 01:52:45.219345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:01.400 [2024-11-21 01:52:45.219391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:01.400 [2024-11-21 01:52:45.219442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:01.400 [2024-11-21 01:52:45.219499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:01.400 [2024-11-21 01:52:45.219507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:01.400 [2024-11-21 01:52:45.219513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:01.400 [2024-11-21 01:52:45.219623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
277.008 ms, result 0 00:27:01.400 true 00:27:01.400 01:52:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 80255 00:27:01.400 01:52:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid80255 00:27:01.400 01:52:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:01.400 [2024-11-21 01:52:45.303165] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:27:01.400 [2024-11-21 01:52:45.303282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80987 ] 00:27:01.661 [2024-11-21 01:52:45.459280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.661 [2024-11-21 01:52:45.535127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.044  [2024-11-21T01:52:47.944Z] Copying: 256/1024 [MB] (256 MBps) [2024-11-21T01:52:48.888Z] Copying: 515/1024 [MB] (258 MBps) [2024-11-21T01:52:49.831Z] Copying: 774/1024 [MB] (258 MBps) [2024-11-21T01:52:50.404Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:27:06.447 00:27:06.447 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 80255 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:06.447 01:52:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:06.448 [2024-11-21 01:52:50.299875] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:27:06.448 [2024-11-21 01:52:50.299991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81045 ] 00:27:06.709 [2024-11-21 01:52:50.456063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.709 [2024-11-21 01:52:50.536750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.970 [2024-11-21 01:52:50.744651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.970 [2024-11-21 01:52:50.744698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:06.970 [2024-11-21 01:52:50.807539] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:06.970 [2024-11-21 01:52:50.807819] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:06.970 [2024-11-21 01:52:50.808058] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:07.233 [2024-11-21 01:52:51.030190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.030219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:07.233 [2024-11-21 01:52:51.030229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.233 [2024-11-21 01:52:51.030235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.030271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.030279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:07.233 [2024-11-21 01:52:51.030285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:07.233 [2024-11-21 01:52:51.030291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.030303] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:07.233 [2024-11-21 01:52:51.030811] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:07.233 [2024-11-21 01:52:51.030823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.030829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:07.233 [2024-11-21 01:52:51.030835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:27:07.233 [2024-11-21 01:52:51.030841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.031777] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:07.233 [2024-11-21 01:52:51.041246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.041274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:07.233 [2024-11-21 01:52:51.041282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.470 ms 00:27:07.233 [2024-11-21 01:52:51.041287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.041330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.041337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:07.233 [2024-11-21 01:52:51.041343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:07.233 [2024-11-21 01:52:51.041349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.045847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.045868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:07.233 [2024-11-21 01:52:51.045875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.455 ms 00:27:07.233 [2024-11-21 01:52:51.045881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.045933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.045939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:07.233 [2024-11-21 01:52:51.045945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:07.233 [2024-11-21 01:52:51.045951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.045981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.045990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:07.233 [2024-11-21 01:52:51.045996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:07.233 [2024-11-21 01:52:51.046002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.046016] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:07.233 [2024-11-21 01:52:51.048573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.048593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:07.233 [2024-11-21 01:52:51.048600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:27:07.233 [2024-11-21 01:52:51.048605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.048638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.048646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:07.233 [2024-11-21 01:52:51.048652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:07.233 [2024-11-21 01:52:51.048658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.048671] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:07.233 [2024-11-21 01:52:51.048688] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:07.233 [2024-11-21 01:52:51.048713] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:07.233 [2024-11-21 01:52:51.048725] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:07.233 [2024-11-21 01:52:51.048802] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:07.233 [2024-11-21 01:52:51.048809] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:07.233 
[2024-11-21 01:52:51.048817] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:07.233 [2024-11-21 01:52:51.048825] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:07.233 [2024-11-21 01:52:51.048833] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:07.233 [2024-11-21 01:52:51.048839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:07.233 [2024-11-21 01:52:51.048845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:07.233 [2024-11-21 01:52:51.048851] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:07.233 [2024-11-21 01:52:51.048856] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:07.233 [2024-11-21 01:52:51.048862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.048868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:07.233 [2024-11-21 01:52:51.048874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:27:07.233 [2024-11-21 01:52:51.048879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.233 [2024-11-21 01:52:51.048946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.233 [2024-11-21 01:52:51.048954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:07.233 [2024-11-21 01:52:51.048959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:07.233 [2024-11-21 01:52:51.048964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.049038] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:07.234 [2024-11-21 01:52:51.049046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:07.234 [2024-11-21 01:52:51.049052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:07.234 [2024-11-21 01:52:51.049068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:07.234 [2024-11-21 01:52:51.049084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.234 [2024-11-21 01:52:51.049094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:07.234 [2024-11-21 01:52:51.049103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:07.234 [2024-11-21 01:52:51.049109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:07.234 [2024-11-21 01:52:51.049115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:07.234 [2024-11-21 01:52:51.049120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:07.234 [2024-11-21 01:52:51.049125] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:07.234 [2024-11-21 01:52:51.049135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:07.234 [2024-11-21 01:52:51.049150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:07.234 [2024-11-21 01:52:51.049165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:07.234 [2024-11-21 01:52:51.049179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:07.234 [2024-11-21 01:52:51.049195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:07.234 [2024-11-21 01:52:51.049209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.234 [2024-11-21 01:52:51.049220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:07.234 [2024-11-21 01:52:51.049225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:07.234 [2024-11-21 01:52:51.049230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:07.234 [2024-11-21 01:52:51.049235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:07.234 [2024-11-21 01:52:51.049240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:07.234 [2024-11-21 01:52:51.049244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:07.234 [2024-11-21 01:52:51.049254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:07.234 [2024-11-21 01:52:51.049260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 01:52:51.049265] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:07.234 [2024-11-21 01:52:51.049271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:07.234 [2024-11-21 01:52:51.049276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:07.234 [2024-11-21 
01:52:51.049288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:07.234 [2024-11-21 01:52:51.049294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:07.234 [2024-11-21 01:52:51.049299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:07.234 [2024-11-21 01:52:51.049304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:07.234 [2024-11-21 01:52:51.049308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:07.234 [2024-11-21 01:52:51.049313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:07.234 [2024-11-21 01:52:51.049320] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:07.234 [2024-11-21 01:52:51.049326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:07.234 [2024-11-21 01:52:51.049338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:07.234 [2024-11-21 01:52:51.049343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:07.234 [2024-11-21 01:52:51.049348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:07.234 [2024-11-21 01:52:51.049354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:07.234 [2024-11-21 01:52:51.049359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:07.234 [2024-11-21 01:52:51.049364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:07.234 [2024-11-21 01:52:51.049370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:07.234 [2024-11-21 01:52:51.049375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:07.234 [2024-11-21 01:52:51.049380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:07.234 [2024-11-21 01:52:51.049407] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:07.234 [2024-11-21 01:52:51.049413] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:07.234 [2024-11-21 01:52:51.049425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:07.234 [2024-11-21 01:52:51.049430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:07.234 [2024-11-21 01:52:51.049435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:07.234 [2024-11-21 01:52:51.049441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.049446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:07.234 [2024-11-21 01:52:51.049452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:27:07.234 [2024-11-21 01:52:51.049457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.070377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.070402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:07.234 [2024-11-21 01:52:51.070410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.880 ms 00:27:07.234 [2024-11-21 01:52:51.070415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.070476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.070485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:07.234 [2024-11-21 01:52:51.070491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:07.234 [2024-11-21 01:52:51.070497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.110474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.110509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:07.234 [2024-11-21 01:52:51.110519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.935 ms 00:27:07.234 [2024-11-21 01:52:51.110528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.110569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.110576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:07.234 [2024-11-21 01:52:51.110583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:07.234 [2024-11-21 01:52:51.110589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.110935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.234 [2024-11-21 01:52:51.110950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:07.234 [2024-11-21 01:52:51.110959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:27:07.234 [2024-11-21 01:52:51.110964] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.234 [2024-11-21 01:52:51.111067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.111073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:07.235 [2024-11-21 01:52:51.111080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:27:07.235 [2024-11-21 01:52:51.111086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.121518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.121541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:07.235 [2024-11-21 01:52:51.121549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.415 ms 00:27:07.235 [2024-11-21 01:52:51.121554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.131285] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:07.235 [2024-11-21 01:52:51.131304] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:07.235 [2024-11-21 01:52:51.131313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.131319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:07.235 [2024-11-21 01:52:51.131326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.671 ms 00:27:07.235 [2024-11-21 01:52:51.131332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.149924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.149950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:07.235 [2024-11-21 01:52:51.149965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.558 ms 00:27:07.235 [2024-11-21 01:52:51.149972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.158921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.158941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:07.235 [2024-11-21 01:52:51.158949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.928 ms 00:27:07.235 [2024-11-21 01:52:51.158954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.167361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.167383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:07.235 [2024-11-21 01:52:51.167390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.379 ms 00:27:07.235 [2024-11-21 01:52:51.167396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.235 [2024-11-21 01:52:51.167882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.235 [2024-11-21 01:52:51.167893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:07.235 [2024-11-21 01:52:51.167900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:27:07.235 [2024-11-21 01:52:51.167906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 
[2024-11-21 01:52:51.212197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.212237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:07.497 [2024-11-21 01:52:51.212248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.278 ms 00:27:07.497 [2024-11-21 01:52:51.212255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.220420] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:07.497 [2024-11-21 01:52:51.222508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.222531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:07.497 [2024-11-21 01:52:51.222541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.206 ms 00:27:07.497 [2024-11-21 01:52:51.222548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.222633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.222642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:07.497 [2024-11-21 01:52:51.222649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:07.497 [2024-11-21 01:52:51.222655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.222707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.222718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:07.497 [2024-11-21 01:52:51.222724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:07.497 [2024-11-21 01:52:51.222731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.222745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.222754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:07.497 [2024-11-21 01:52:51.222760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:07.497 [2024-11-21 01:52:51.222766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.222790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:07.497 [2024-11-21 01:52:51.222799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.222805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:07.497 [2024-11-21 01:52:51.222811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:07.497 [2024-11-21 01:52:51.222817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.240708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.240735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:07.497 [2024-11-21 01:52:51.240744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.875 ms 00:27:07.497 [2024-11-21 01:52:51.240751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.240815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.497 [2024-11-21 01:52:51.240823] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:07.497 [2024-11-21 01:52:51.240829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:07.497 [2024-11-21 01:52:51.240836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.497 [2024-11-21 01:52:51.241599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 211.052 ms, result 0 00:27:08.441  [2024-11-21T01:52:53.343Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-21T01:52:54.285Z] Copying: 40/1024 [MB] (10 MBps) [2024-11-21T01:52:55.673Z] Copying: 64/1024 [MB] (24 MBps) [2024-11-21T01:52:56.616Z] Copying: 75/1024 [MB] (10 MBps) [2024-11-21T01:52:57.561Z] Copying: 87476/1048576 [kB] (10236 kBps) [2024-11-21T01:52:58.506Z] Copying: 95/1024 [MB] (10 MBps) [2024-11-21T01:52:59.449Z] Copying: 123/1024 [MB] (27 MBps) [2024-11-21T01:53:00.397Z] Copying: 160/1024 [MB] (37 MBps) [2024-11-21T01:53:01.340Z] Copying: 177/1024 [MB] (16 MBps) [2024-11-21T01:53:02.284Z] Copying: 193/1024 [MB] (16 MBps) [2024-11-21T01:53:03.670Z] Copying: 236/1024 [MB] (43 MBps) [2024-11-21T01:53:04.647Z] Copying: 264/1024 [MB] (27 MBps) [2024-11-21T01:53:05.324Z] Copying: 291/1024 [MB] (27 MBps) [2024-11-21T01:53:06.269Z] Copying: 306/1024 [MB] (15 MBps) [2024-11-21T01:53:07.655Z] Copying: 329/1024 [MB] (22 MBps) [2024-11-21T01:53:08.599Z] Copying: 348/1024 [MB] (18 MBps) [2024-11-21T01:53:09.543Z] Copying: 387/1024 [MB] (38 MBps) [2024-11-21T01:53:10.485Z] Copying: 433/1024 [MB] (45 MBps) [2024-11-21T01:53:11.428Z] Copying: 463/1024 [MB] (30 MBps) [2024-11-21T01:53:12.372Z] Copying: 502/1024 [MB] (38 MBps) [2024-11-21T01:53:13.314Z] Copying: 529/1024 [MB] (27 MBps) [2024-11-21T01:53:14.258Z] Copying: 546/1024 [MB] (16 MBps) [2024-11-21T01:53:15.644Z] Copying: 559/1024 [MB] (13 MBps) [2024-11-21T01:53:16.584Z] Copying: 569/1024 [MB] (10 MBps) [2024-11-21T01:53:17.523Z] Copying: 582/1024 [MB] (12 MBps) [2024-11-21T01:53:18.467Z] Copying: 614/1024 [MB] (31 MBps) [2024-11-21T01:53:19.407Z] Copying: 640/1024 [MB] (26 MBps) [2024-11-21T01:53:20.349Z] Copying: 663/1024 [MB] (23 MBps) [2024-11-21T01:53:21.292Z] Copying: 675/1024 [MB] (12 MBps) [2024-11-21T01:53:22.677Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-21T01:53:23.621Z] Copying: 699/1024 [MB] (12 MBps) [2024-11-21T01:53:24.565Z] Copying: 711/1024 [MB] (11 MBps) [2024-11-21T01:53:25.508Z] Copying: 736/1024 [MB] (24 MBps) [2024-11-21T01:53:26.452Z] Copying: 755/1024 [MB] (19 MBps) [2024-11-21T01:53:27.396Z] Copying: 772/1024 [MB] (16 MBps) [2024-11-21T01:53:28.340Z] Copying: 802/1024 [MB] (29 MBps) [2024-11-21T01:53:29.282Z] Copying: 833/1024 [MB] (30 MBps) [2024-11-21T01:53:30.667Z] Copying: 861/1024 [MB] (28 MBps) [2024-11-21T01:53:31.610Z] Copying: 890/1024 [MB] (28 MBps) [2024-11-21T01:53:32.553Z] Copying: 911/1024 [MB] (21 MBps) [2024-11-21T01:53:33.509Z] Copying: 937/1024 [MB] (25 MBps) [2024-11-21T01:53:34.459Z] Copying: 954/1024 [MB] (17 MBps) [2024-11-21T01:53:35.401Z] Copying: 973/1024 [MB] (18 MBps) [2024-11-21T01:53:36.347Z] Copying: 996/1024 [MB] (23 MBps) [2024-11-21T01:53:37.347Z] Copying: 1023/1024 [MB] (26 MBps) [2024-11-21T01:53:37.347Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-21 01:53:37.059821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.390 [2024-11-21 01:53:37.060048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:53.390 [2024-11-21 01:53:37.060131] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:53.390 [2024-11-21 01:53:37.060158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.390 [2024-11-21 01:53:37.065116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:53.390 [2024-11-21 01:53:37.069151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.390 [2024-11-21 01:53:37.069314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:53.390 [2024-11-21 01:53:37.069430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:27:53.390 [2024-11-21 01:53:37.069456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.079286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.079443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:53.391 [2024-11-21 01:53:37.079510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.433 ms 00:27:53.391 [2024-11-21 01:53:37.079536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.102819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.102976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:53.391 [2024-11-21 01:53:37.103042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.248 ms 00:27:53.391 [2024-11-21 01:53:37.103066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.109230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.109384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:53.391 [2024-11-21 01:53:37.109441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:27:53.391 [2024-11-21 01:53:37.109464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.135825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.135996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:53.391 [2024-11-21 01:53:37.136057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.287 ms 00:27:53.391 [2024-11-21 01:53:37.136081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.151942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.152102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:53.391 [2024-11-21 01:53:37.152163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.791 ms 00:27:53.391 [2024-11-21 01:53:37.152187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.308118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.308277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:53.391 [2024-11-21 01:53:37.308334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 155.874 ms 00:27:53.391 [2024-11-21 01:53:37.308364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.391 [2024-11-21 01:53:37.334381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:53.391 [2024-11-21 01:53:37.334538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:53.391 [2024-11-21 01:53:37.334593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.983 ms 00:27:53.391 [2024-11-21 01:53:37.334635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.653 [2024-11-21 01:53:37.360043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.653 [2024-11-21 01:53:37.360208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:53.654 [2024-11-21 01:53:37.360225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.325 ms 00:27:53.654 [2024-11-21 01:53:37.360234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.654 [2024-11-21 01:53:37.385214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.654 [2024-11-21 01:53:37.385261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:53.654 [2024-11-21 01:53:37.385273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.940 ms 00:27:53.654 [2024-11-21 01:53:37.385280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.654 [2024-11-21 01:53:37.409711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.654 [2024-11-21 01:53:37.409758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:53.654 [2024-11-21 01:53:37.409770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.361 ms 00:27:53.654 [2024-11-21 01:53:37.409778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.654 [2024-11-21 01:53:37.409821] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:53.654 [2024-11-21 01:53:37.409836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95488 / 261120 wr_cnt: 1 state: open 00:27:53.654 [2024-11-21 01:53:37.409847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 
00:27:53.654 [2024-11-21 01:53:37.409936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.409996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 
wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:53.654 [2024-11-21 01:53:37.410449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410503] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:53.655 [2024-11-21 01:53:37.410638] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:53.655 [2024-11-21 01:53:37.410647] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b817a20-1913-404f-8c83-c65c9c250ee0 00:27:53.655 [2024-11-21 01:53:37.410656] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95488 00:27:53.655 [2024-11-21 01:53:37.410671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96448 00:27:53.655 [2024-11-21 01:53:37.410686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95488 00:27:53.655 [2024-11-21 01:53:37.410696] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101 00:27:53.655 [2024-11-21 01:53:37.410705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:53.655 [2024-11-21 01:53:37.410714] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:53.655 [2024-11-21 01:53:37.410733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:53.655 [2024-11-21 01:53:37.410740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:53.655 [2024-11-21 01:53:37.410748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:53.655 [2024-11-21 01:53:37.410756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.655 [2024-11-21 01:53:37.410764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:53.655 [2024-11-21 01:53:37.410773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:27:53.655 
[2024-11-21 01:53:37.410781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.424144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.655 [2024-11-21 01:53:37.424189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:53.655 [2024-11-21 01:53:37.424201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.344 ms 00:27:53.655 [2024-11-21 01:53:37.424209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.424630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:53.655 [2024-11-21 01:53:37.424649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:53.655 [2024-11-21 01:53:37.424660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:27:53.655 [2024-11-21 01:53:37.424668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.461094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.655 [2024-11-21 01:53:37.461144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:53.655 [2024-11-21 01:53:37.461155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.655 [2024-11-21 01:53:37.461163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.461228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.655 [2024-11-21 01:53:37.461237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:53.655 [2024-11-21 01:53:37.461246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.655 [2024-11-21 01:53:37.461253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.461347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.655 [2024-11-21 01:53:37.461360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:53.655 [2024-11-21 01:53:37.461369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.655 [2024-11-21 01:53:37.461377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.461393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.655 [2024-11-21 01:53:37.461401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:53.655 [2024-11-21 01:53:37.461410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.655 [2024-11-21 01:53:37.461418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.655 [2024-11-21 01:53:37.544629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.655 [2024-11-21 01:53:37.544697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:53.655 [2024-11-21 01:53:37.544711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.655 [2024-11-21 01:53:37.544719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:53.916 [2024-11-21 01:53:37.612473] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:53.916 [2024-11-21 01:53:37.612567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:53.916 [2024-11-21 01:53:37.612677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:53.916 [2024-11-21 01:53:37.612808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:53.916 [2024-11-21 01:53:37.612864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.612925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:53.916 [2024-11-21 01:53:37.612933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.612942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.612988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:53.916 [2024-11-21 01:53:37.613001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:53.916 [2024-11-21 01:53:37.613009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:53.916 [2024-11-21 01:53:37.613018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:53.916 [2024-11-21 01:53:37.613155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 553.291 ms, result 0 00:27:54.860 00:27:54.860 00:27:54.860 01:53:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:57.409 01:53:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:57.410 [2024-11-21 01:53:40.859471] Starting SPDK v25.01-pre git sha1 
557f022f6 / DPDK 24.03.0 initialization... 00:27:57.410 [2024-11-21 01:53:40.859602] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81559 ] 00:27:57.410 [2024-11-21 01:53:41.021502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.410 [2024-11-21 01:53:41.139060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.673 [2024-11-21 01:53:41.427699] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.673 [2024-11-21 01:53:41.427786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.673 [2024-11-21 01:53:41.590412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.590475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:57.673 [2024-11-21 01:53:41.590496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.673 [2024-11-21 01:53:41.590505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.590561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.590573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.673 [2024-11-21 01:53:41.590586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:57.673 [2024-11-21 01:53:41.590594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.590633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:57.673 [2024-11-21 01:53:41.591333] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:57.673 [2024-11-21 01:53:41.591352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.591360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.673 [2024-11-21 01:53:41.591370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:27:57.673 [2024-11-21 01:53:41.591379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.593116] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:57.673 [2024-11-21 01:53:41.607459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.607511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:57.673 [2024-11-21 01:53:41.607526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.346 ms 00:27:57.673 [2024-11-21 01:53:41.607536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.607641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.607653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:57.673 [2024-11-21 01:53:41.607664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:57.673 [2024-11-21 01:53:41.607671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.616946] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.616993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.673 [2024-11-21 01:53:41.617004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.198 ms 00:27:57.673 [2024-11-21 01:53:41.617014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.617107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.617117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.673 [2024-11-21 01:53:41.617126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:27:57.673 [2024-11-21 01:53:41.617134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.617180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.617191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:57.673 [2024-11-21 01:53:41.617200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:57.673 [2024-11-21 01:53:41.617207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.617231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.673 [2024-11-21 01:53:41.621438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.621480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.673 [2024-11-21 01:53:41.621492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:27:57.673 [2024-11-21 01:53:41.621503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.621538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.621548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:57.673 [2024-11-21 01:53:41.621556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:57.673 [2024-11-21 01:53:41.621565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.621629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:57.673 [2024-11-21 01:53:41.621653] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:57.673 [2024-11-21 01:53:41.621689] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:57.673 [2024-11-21 01:53:41.621723] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:57.673 [2024-11-21 01:53:41.621830] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:57.673 [2024-11-21 01:53:41.621842] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:57.673 [2024-11-21 01:53:41.621853] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:57.673 [2024-11-21 01:53:41.621865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:57.673 
[2024-11-21 01:53:41.621875] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:57.673 [2024-11-21 01:53:41.621885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:57.673 [2024-11-21 01:53:41.621893] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:57.673 [2024-11-21 01:53:41.621901] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:57.673 [2024-11-21 01:53:41.621909] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:57.673 [2024-11-21 01:53:41.621922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.621930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:57.673 [2024-11-21 01:53:41.621939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:27:57.673 [2024-11-21 01:53:41.621947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.622034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.673 [2024-11-21 01:53:41.622043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:57.673 [2024-11-21 01:53:41.622052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:57.673 [2024-11-21 01:53:41.622060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.673 [2024-11-21 01:53:41.622161] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:57.673 [2024-11-21 01:53:41.622175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:57.673 [2024-11-21 01:53:41.622184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.673 [2024-11-21 01:53:41.622192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:57.673 [2024-11-21 01:53:41.622208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:57.673 [2024-11-21 01:53:41.622222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:57.673 [2024-11-21 01:53:41.622228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.673 [2024-11-21 01:53:41.622242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:57.673 [2024-11-21 01:53:41.622249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:57.673 [2024-11-21 01:53:41.622255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.673 [2024-11-21 01:53:41.622262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:57.673 [2024-11-21 01:53:41.622269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:57.673 [2024-11-21 01:53:41.622282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:57.673 [2024-11-21 01:53:41.622295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:57.673 [2024-11-21 
01:53:41.622302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:57.673 [2024-11-21 01:53:41.622315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.673 [2024-11-21 01:53:41.622328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:57.673 [2024-11-21 01:53:41.622334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:57.673 [2024-11-21 01:53:41.622341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.673 [2024-11-21 01:53:41.622347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:57.673 [2024-11-21 01:53:41.622354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.674 [2024-11-21 01:53:41.622369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:57.674 [2024-11-21 01:53:41.622376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.674 [2024-11-21 01:53:41.622390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:57.674 [2024-11-21 01:53:41.622397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.674 [2024-11-21 01:53:41.622410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:57.674 [2024-11-21 01:53:41.622417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:57.674 [2024-11-21 01:53:41.622424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.674 [2024-11-21 01:53:41.622431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:57.674 [2024-11-21 01:53:41.622437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:57.674 [2024-11-21 01:53:41.622443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:57.674 [2024-11-21 01:53:41.622458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:57.674 [2024-11-21 01:53:41.622465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622471] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:57.674 [2024-11-21 01:53:41.622479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:57.674 [2024-11-21 01:53:41.622487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.674 [2024-11-21 01:53:41.622495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.674 [2024-11-21 01:53:41.622503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:57.674 [2024-11-21 01:53:41.622511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:57.674 [2024-11-21 01:53:41.622517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:27:57.674 [2024-11-21 01:53:41.622525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:57.674 [2024-11-21 01:53:41.622531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:57.674 [2024-11-21 01:53:41.622537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:57.674 [2024-11-21 01:53:41.622547] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:57.674 [2024-11-21 01:53:41.622557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:57.674 [2024-11-21 01:53:41.622572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:57.674 [2024-11-21 01:53:41.622579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:57.674 [2024-11-21 01:53:41.622586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:57.674 [2024-11-21 01:53:41.622594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:57.674 [2024-11-21 01:53:41.622601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:57.674 [2024-11-21 01:53:41.622608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:57.674 [2024-11-21 01:53:41.622633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:57.674 [2024-11-21 01:53:41.622640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:57.674 [2024-11-21 01:53:41.622647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:57.674 [2024-11-21 01:53:41.622687] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:57.674 [2024-11-21 01:53:41.622698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.674 [2024-11-21 01:53:41.622714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:57.674 [2024-11-21 01:53:41.622722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:57.674 [2024-11-21 01:53:41.622729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:57.674 [2024-11-21 01:53:41.622737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.674 [2024-11-21 01:53:41.622745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:57.674 [2024-11-21 01:53:41.622753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:27:57.674 [2024-11-21 01:53:41.622760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.654930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.654979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.937 [2024-11-21 01:53:41.654992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.123 ms 00:27:57.937 [2024-11-21 01:53:41.655002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.655092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.655101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:57.937 [2024-11-21 01:53:41.655110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:57.937 [2024-11-21 01:53:41.655118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.704219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.704276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.937 [2024-11-21 01:53:41.704290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.040 ms 00:27:57.937 [2024-11-21 01:53:41.704299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.704353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.704364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.937 [2024-11-21 01:53:41.704373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:57.937 [2024-11-21 01:53:41.704385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.705032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.705057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.937 [2024-11-21 01:53:41.705068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:27:57.937 [2024-11-21 01:53:41.705077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.705233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.705244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:27:57.937 [2024-11-21 01:53:41.705253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:27:57.937 [2024-11-21 01:53:41.705268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.721307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.721354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.937 [2024-11-21 01:53:41.721369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.018 ms 00:27:57.937 [2024-11-21 01:53:41.721377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.735800] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:57.937 [2024-11-21 01:53:41.735851] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.937 [2024-11-21 01:53:41.735865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.735873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.937 [2024-11-21 01:53:41.735884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.374 ms 00:27:57.937 [2024-11-21 01:53:41.735893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.937 [2024-11-21 01:53:41.761472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.937 [2024-11-21 01:53:41.761525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.937 [2024-11-21 01:53:41.761538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.524 ms 00:27:57.937 [2024-11-21 01:53:41.761546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.774580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.774644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.938 [2024-11-21 01:53:41.774656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.971 ms 00:27:57.938 [2024-11-21 01:53:41.774664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.787344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.787389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:57.938 [2024-11-21 01:53:41.787401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.628 ms 00:27:57.938 [2024-11-21 01:53:41.787409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.788090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.788114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:57.938 [2024-11-21 01:53:41.788125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:27:57.938 [2024-11-21 01:53:41.788136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.855021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.855088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:57.938 [2024-11-21 01:53:41.855113] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.862 ms 00:27:57.938 [2024-11-21 01:53:41.855122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.866580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:57.938 [2024-11-21 01:53:41.869743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.869783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.938 [2024-11-21 01:53:41.869798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.561 ms 00:27:57.938 [2024-11-21 01:53:41.869807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.869900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.869912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.938 [2024-11-21 01:53:41.869922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:57.938 [2024-11-21 01:53:41.869934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.871574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.871644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.938 [2024-11-21 01:53:41.871657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.601 ms 00:27:57.938 [2024-11-21 01:53:41.871665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.871696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.871705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.938 [2024-11-21 01:53:41.871715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.938 [2024-11-21 01:53:41.871724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.938 [2024-11-21 01:53:41.871768] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.938 [2024-11-21 01:53:41.871783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.938 [2024-11-21 01:53:41.871792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.938 [2024-11-21 01:53:41.871800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:57.938 [2024-11-21 01:53:41.871810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.199 [2024-11-21 01:53:41.897334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.199 [2024-11-21 01:53:41.897386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.199 [2024-11-21 01:53:41.897400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:27:58.199 [2024-11-21 01:53:41.897414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.199 [2024-11-21 01:53:41.897504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.199 [2024-11-21 01:53:41.897515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.199 [2024-11-21 01:53:41.897526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:58.199 [2024-11-21 01:53:41.897534] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:58.199 [2024-11-21 01:53:41.898837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.894 ms, result 0 00:27:59.143  [2024-11-21T01:53:44.488Z] Copying: 1016/1048576 [kB] (1016 kBps) [2024-11-21T01:53:45.431Z] Copying: 4044/1048576 [kB] (3028 kBps) [2024-11-21T01:53:46.375Z] Copying: 16/1024 [MB] (12 MBps) [2024-11-21T01:53:47.314Z] Copying: 37/1024 [MB] (21 MBps) [2024-11-21T01:53:48.257Z] Copying: 63/1024 [MB] (25 MBps) [2024-11-21T01:53:49.202Z] Copying: 86/1024 [MB] (22 MBps) [2024-11-21T01:53:50.145Z] Copying: 106/1024 [MB] (20 MBps) [2024-11-21T01:53:51.089Z] Copying: 123/1024 [MB] (16 MBps) [2024-11-21T01:53:52.476Z] Copying: 144/1024 [MB] (21 MBps) [2024-11-21T01:53:53.420Z] Copying: 172/1024 [MB] (27 MBps) [2024-11-21T01:53:54.363Z] Copying: 198/1024 [MB] (26 MBps) [2024-11-21T01:53:55.306Z] Copying: 216/1024 [MB] (18 MBps) [2024-11-21T01:53:56.250Z] Copying: 242/1024 [MB] (25 MBps) [2024-11-21T01:53:57.194Z] Copying: 267/1024 [MB] (24 MBps) [2024-11-21T01:53:58.138Z] Copying: 284/1024 [MB] (17 MBps) [2024-11-21T01:53:59.524Z] Copying: 315/1024 [MB] (31 MBps) [2024-11-21T01:54:00.097Z] Copying: 356/1024 [MB] (40 MBps) [2024-11-21T01:54:01.484Z] Copying: 375/1024 [MB] (19 MBps) [2024-11-21T01:54:02.428Z] Copying: 400/1024 [MB] (25 MBps) [2024-11-21T01:54:03.372Z] Copying: 424/1024 [MB] (23 MBps) [2024-11-21T01:54:04.317Z] Copying: 451/1024 [MB] (26 MBps) [2024-11-21T01:54:05.261Z] Copying: 472/1024 [MB] (21 MBps) [2024-11-21T01:54:06.203Z] Copying: 488/1024 [MB] (15 MBps) [2024-11-21T01:54:07.144Z] Copying: 503/1024 [MB] (15 MBps) [2024-11-21T01:54:08.090Z] Copying: 522/1024 [MB] (18 MBps) [2024-11-21T01:54:09.104Z] Copying: 543/1024 [MB] (20 MBps) [2024-11-21T01:54:10.493Z] Copying: 574/1024 [MB] (31 MBps) [2024-11-21T01:54:11.437Z] Copying: 590/1024 [MB] (15 MBps) [2024-11-21T01:54:12.381Z] Copying: 605/1024 [MB] (15 MBps) [2024-11-21T01:54:13.325Z] Copying: 621/1024 [MB] (15 MBps) [2024-11-21T01:54:14.266Z] Copying: 647/1024 [MB] (26 MBps) [2024-11-21T01:54:15.209Z] Copying: 663/1024 [MB] (15 MBps) [2024-11-21T01:54:16.153Z] Copying: 682/1024 [MB] (19 MBps) [2024-11-21T01:54:17.096Z] Copying: 710/1024 [MB] (28 MBps) [2024-11-21T01:54:18.483Z] Copying: 748/1024 [MB] (38 MBps) [2024-11-21T01:54:19.426Z] Copying: 774/1024 [MB] (25 MBps) [2024-11-21T01:54:20.367Z] Copying: 800/1024 [MB] (26 MBps) [2024-11-21T01:54:21.308Z] Copying: 826/1024 [MB] (26 MBps) [2024-11-21T01:54:22.253Z] Copying: 851/1024 [MB] (25 MBps) [2024-11-21T01:54:23.197Z] Copying: 867/1024 [MB] (16 MBps) [2024-11-21T01:54:24.142Z] Copying: 893/1024 [MB] (25 MBps) [2024-11-21T01:54:25.086Z] Copying: 910/1024 [MB] (17 MBps) [2024-11-21T01:54:26.475Z] Copying: 929/1024 [MB] (18 MBps) [2024-11-21T01:54:27.418Z] Copying: 960/1024 [MB] (30 MBps) [2024-11-21T01:54:28.359Z] Copying: 976/1024 [MB] (16 MBps) [2024-11-21T01:54:29.303Z] Copying: 998/1024 [MB] (22 MBps) [2024-11-21T01:54:29.566Z] Copying: 1018/1024 [MB] (20 MBps) [2024-11-21T01:54:29.827Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-21 01:54:29.728209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.728281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:45.870 [2024-11-21 01:54:29.728306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:45.870 [2024-11-21 01:54:29.728320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:28:45.870 [2024-11-21 01:54:29.728353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:45.870 [2024-11-21 01:54:29.732719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.732760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:45.870 [2024-11-21 01:54:29.732775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.345 ms 00:28:45.870 [2024-11-21 01:54:29.732787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.733123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.733145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:45.870 [2024-11-21 01:54:29.733162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:28:45.870 [2024-11-21 01:54:29.733174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.745330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.745367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:45.870 [2024-11-21 01:54:29.745377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.135 ms 00:28:45.870 [2024-11-21 01:54:29.745385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.750513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.750542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:45.870 [2024-11-21 01:54:29.750550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.102 ms 00:28:45.870 [2024-11-21 01:54:29.750560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.769162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.769191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:45.870 [2024-11-21 01:54:29.769200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.563 ms 00:28:45.870 [2024-11-21 01:54:29.769207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.780476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.780504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:45.870 [2024-11-21 01:54:29.780513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.241 ms 00:28:45.870 [2024-11-21 01:54:29.780520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.782857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.782887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:45.870 [2024-11-21 01:54:29.782894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:28:45.870 [2024-11-21 01:54:29.782901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:45.870 [2024-11-21 01:54:29.801031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:45.870 [2024-11-21 01:54:29.801056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 
00:28:45.870 [2024-11-21 01:54:29.801064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.115 ms
00:28:45.870 [2024-11-21 01:54:29.801069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:45.870 [2024-11-21 01:54:29.819187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:45.870 [2024-11-21 01:54:29.819214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:45.870 [2024-11-21 01:54:29.819228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.091 ms
00:28:45.870 [2024-11-21 01:54:29.819234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.132 [2024-11-21 01:54:29.836226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.132 [2024-11-21 01:54:29.836253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:46.132 [2024-11-21 01:54:29.836260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.966 ms
00:28:46.132 [2024-11-21 01:54:29.836266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.132 [2024-11-21 01:54:29.853590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.132 [2024-11-21 01:54:29.853623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:46.132 [2024-11-21 01:54:29.853630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.282 ms
00:28:46.132 [2024-11-21 01:54:29.853636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.132 [2024-11-21 01:54:29.853661] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:46.132 [2024-11-21 01:54:29.853671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:46.132 [2024-11-21 01:54:29.853679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:28:46.132 [2024-11-21 01:54:29.853686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 through Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:46.133 [2024-11-21 01:54:29.854253] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:46.133 [2024-11-21 01:54:29.854274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b817a20-1913-404f-8c83-c65c9c250ee0
00:28:46.133 [2024-11-21 01:54:29.854281] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:28:46.133 [2024-11-21 01:54:29.854287] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 169152
00:28:46.133 [2024-11-21 01:54:29.854293] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 167168
00:28:46.133 [2024-11-21 01:54:29.854304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119
00:28:46.133 [2024-11-21 01:54:29.854309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:46.133 [2024-11-21 01:54:29.854315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:46.133 [2024-11-21 01:54:29.854321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:46.133 [2024-11-21 01:54:29.854330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:46.133 [2024-11-21 01:54:29.854335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:46.133 [2024-11-21 01:54:29.854341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:46.133 [2024-11-21 01:54:29.854347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:46.133 [2024-11-21 01:54:29.854353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms
00:28:46.133 [2024-11-21 01:54:29.854358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:46.133 [2024-11-21 01:54:29.864064]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.133 [2024-11-21 01:54:29.864092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:46.133 [2024-11-21 01:54:29.864100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.693 ms 00:28:46.133 [2024-11-21 01:54:29.864106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.133 [2024-11-21 01:54:29.864370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:46.133 [2024-11-21 01:54:29.864377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:46.133 [2024-11-21 01:54:29.864383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:28:46.133 [2024-11-21 01:54:29.864389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.133 [2024-11-21 01:54:29.890241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.890280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:46.134 [2024-11-21 01:54:29.890287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.890294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.890332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.890338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:46.134 [2024-11-21 01:54:29.890344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.890350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.890391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.890402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:46.134 [2024-11-21 01:54:29.890408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.890414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.890425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.890431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:46.134 [2024-11-21 01:54:29.890437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.890442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.949473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.949505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:46.134 [2024-11-21 01:54:29.949514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.949520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:46.134 [2024-11-21 01:54:29.997263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997269] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:46.134 [2024-11-21 01:54:29.997337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:46.134 [2024-11-21 01:54:29.997381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:46.134 [2024-11-21 01:54:29.997466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:46.134 [2024-11-21 01:54:29.997508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:46.134 [2024-11-21 01:54:29.997554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:46.134 [2024-11-21 01:54:29.997597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:46.134 [2024-11-21 01:54:29.997603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:46.134 [2024-11-21 01:54:29.997609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:46.134 [2024-11-21 01:54:29.997713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.491 ms, result 0 00:28:46.706 00:28:46.706 00:28:46.706 01:54:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:49.248 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:49.248 01:54:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:49.248 [2024-11-21 01:54:32.686884] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
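A few of the figures above can be cross-checked by hand. The lines below are only an illustrative sketch, not part of the captured console output; they assume the ftl0 bdev exposes 4096-byte blocks, which agrees with the layout dump further down (0x5000 blocks shown as 80.00 MiB).
$ awk 'BEGIN { printf "WAF %.4f\n", 169152 / 167168 }'   # total writes / user writes from the shutdown stats dump
WAF 1.0119
$ echo $(( 261120 + 1536 )) valid LBAs                   # Band 1 (closed) + Band 2 (open) from the bands validity dump
262656 valid LBAs
$ echo $(( 262144 * 4096 / 1024 / 1024 )) MiB            # spdk_dd --count=262144 at the assumed 4096 B per block
1024 MiB                                                 # matches the 1024 [MB] copy progress reported below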
00:28:49.248 [2024-11-21 01:54:32.687185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82084 ] 00:28:49.248 [2024-11-21 01:54:32.850896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.248 [2024-11-21 01:54:32.967963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.511 [2024-11-21 01:54:33.258119] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.511 [2024-11-21 01:54:33.258198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:49.511 [2024-11-21 01:54:33.421598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.421663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:49.511 [2024-11-21 01:54:33.421683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:49.511 [2024-11-21 01:54:33.421693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.421747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.421761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:49.511 [2024-11-21 01:54:33.421773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:49.511 [2024-11-21 01:54:33.421782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.421802] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:49.511 [2024-11-21 01:54:33.422534] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:49.511 [2024-11-21 01:54:33.422577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.422588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:49.511 [2024-11-21 01:54:33.422599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:28:49.511 [2024-11-21 01:54:33.422607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.424411] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:49.511 [2024-11-21 01:54:33.437667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.437696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:49.511 [2024-11-21 01:54:33.437708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.259 ms 00:28:49.511 [2024-11-21 01:54:33.437717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.437772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.437781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:49.511 [2024-11-21 01:54:33.437789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:49.511 [2024-11-21 01:54:33.437796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.442792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:49.511 [2024-11-21 01:54:33.442822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:49.511 [2024-11-21 01:54:33.442832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.949 ms 00:28:49.511 [2024-11-21 01:54:33.442839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.442908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.442917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:49.511 [2024-11-21 01:54:33.442925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:49.511 [2024-11-21 01:54:33.442932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.442978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.442987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:49.511 [2024-11-21 01:54:33.442995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:49.511 [2024-11-21 01:54:33.443003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.443022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:49.511 [2024-11-21 01:54:33.446221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.446247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:49.511 [2024-11-21 01:54:33.446256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:28:49.511 [2024-11-21 01:54:33.446266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.446310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.446319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:49.511 [2024-11-21 01:54:33.446328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:49.511 [2024-11-21 01:54:33.446335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.446354] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:49.511 [2024-11-21 01:54:33.446372] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:49.511 [2024-11-21 01:54:33.446406] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:49.511 [2024-11-21 01:54:33.446424] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:49.511 [2024-11-21 01:54:33.446526] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:49.511 [2024-11-21 01:54:33.446543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:49.511 [2024-11-21 01:54:33.446554] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:49.511 [2024-11-21 01:54:33.446564] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:49.511 [2024-11-21 01:54:33.446573] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:49.511 [2024-11-21 01:54:33.446581] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:49.511 [2024-11-21 01:54:33.446589] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:49.511 [2024-11-21 01:54:33.446596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:49.511 [2024-11-21 01:54:33.446604] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:49.511 [2024-11-21 01:54:33.446626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.446634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:49.511 [2024-11-21 01:54:33.446642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:28:49.511 [2024-11-21 01:54:33.446649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.446733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.511 [2024-11-21 01:54:33.446747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:49.511 [2024-11-21 01:54:33.446755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:49.511 [2024-11-21 01:54:33.446761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.511 [2024-11-21 01:54:33.446862] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:49.511 [2024-11-21 01:54:33.446875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:49.511 [2024-11-21 01:54:33.446883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.511 [2024-11-21 01:54:33.446891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.511 [2024-11-21 01:54:33.446899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:49.511 [2024-11-21 01:54:33.446906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:49.511 [2024-11-21 01:54:33.446913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:49.511 [2024-11-21 01:54:33.446919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:49.511 [2024-11-21 01:54:33.446926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:49.511 [2024-11-21 01:54:33.446932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.511 [2024-11-21 01:54:33.446939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:49.511 [2024-11-21 01:54:33.446946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:49.511 [2024-11-21 01:54:33.446953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:49.511 [2024-11-21 01:54:33.446960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:49.511 [2024-11-21 01:54:33.446966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:49.511 [2024-11-21 01:54:33.446977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.511 [2024-11-21 01:54:33.446984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:49.511 [2024-11-21 01:54:33.446991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:49.511 [2024-11-21 01:54:33.446997] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.511 [2024-11-21 01:54:33.447004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:49.512 [2024-11-21 01:54:33.447010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:49.512 [2024-11-21 01:54:33.447029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:49.512 [2024-11-21 01:54:33.447048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:49.512 [2024-11-21 01:54:33.447067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:49.512 [2024-11-21 01:54:33.447086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.512 [2024-11-21 01:54:33.447099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:49.512 [2024-11-21 01:54:33.447105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:49.512 [2024-11-21 01:54:33.447112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:49.512 [2024-11-21 01:54:33.447119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:49.512 [2024-11-21 01:54:33.447125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:49.512 [2024-11-21 01:54:33.447132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:49.512 [2024-11-21 01:54:33.447144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:49.512 [2024-11-21 01:54:33.447150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447158] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:49.512 [2024-11-21 01:54:33.447166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:49.512 [2024-11-21 01:54:33.447173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:49.512 [2024-11-21 01:54:33.447187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:49.512 [2024-11-21 01:54:33.447194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:49.512 [2024-11-21 01:54:33.447200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:49.512 
[2024-11-21 01:54:33.447210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:49.512 [2024-11-21 01:54:33.447216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:49.512 [2024-11-21 01:54:33.447222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:49.512 [2024-11-21 01:54:33.447230] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:49.512 [2024-11-21 01:54:33.447239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:49.512 [2024-11-21 01:54:33.447254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:49.512 [2024-11-21 01:54:33.447260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:49.512 [2024-11-21 01:54:33.447267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:49.512 [2024-11-21 01:54:33.447274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:49.512 [2024-11-21 01:54:33.447282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:49.512 [2024-11-21 01:54:33.447289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:49.512 [2024-11-21 01:54:33.447295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:49.512 [2024-11-21 01:54:33.447302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:49.512 [2024-11-21 01:54:33.447309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:49.512 [2024-11-21 01:54:33.447343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:49.512 [2024-11-21 01:54:33.447353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:49.512 [2024-11-21 01:54:33.447368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:49.512 [2024-11-21 01:54:33.447375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:49.512 [2024-11-21 01:54:33.447382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:49.512 [2024-11-21 01:54:33.447390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.512 [2024-11-21 01:54:33.447397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:49.512 [2024-11-21 01:54:33.447405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:28:49.512 [2024-11-21 01:54:33.447412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.473360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.774 [2024-11-21 01:54:33.473393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:49.774 [2024-11-21 01:54:33.473403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.897 ms 00:28:49.774 [2024-11-21 01:54:33.473414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.473491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.774 [2024-11-21 01:54:33.473499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:49.774 [2024-11-21 01:54:33.473507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:49.774 [2024-11-21 01:54:33.473514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.511401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.774 [2024-11-21 01:54:33.511440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:49.774 [2024-11-21 01:54:33.511452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.837 ms 00:28:49.774 [2024-11-21 01:54:33.511460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.511498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.774 [2024-11-21 01:54:33.511508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:49.774 [2024-11-21 01:54:33.511519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:49.774 [2024-11-21 01:54:33.511526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.511898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.774 [2024-11-21 01:54:33.511926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:49.774 [2024-11-21 01:54:33.511936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:28:49.774 [2024-11-21 01:54:33.511943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.774 [2024-11-21 01:54:33.512066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.512075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:49.775 [2024-11-21 01:54:33.512083] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:28:49.775 [2024-11-21 01:54:33.512096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.525366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.525397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:49.775 [2024-11-21 01:54:33.525410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.251 ms 00:28:49.775 [2024-11-21 01:54:33.525418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.538584] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:49.775 [2024-11-21 01:54:33.538627] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:49.775 [2024-11-21 01:54:33.538639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.538647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:49.775 [2024-11-21 01:54:33.538656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.117 ms 00:28:49.775 [2024-11-21 01:54:33.538663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.563586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.563630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:49.775 [2024-11-21 01:54:33.563641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.886 ms 00:28:49.775 [2024-11-21 01:54:33.563648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.575505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.575539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:49.775 [2024-11-21 01:54:33.575549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.817 ms 00:28:49.775 [2024-11-21 01:54:33.575556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.587205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.587239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:49.775 [2024-11-21 01:54:33.587248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.615 ms 00:28:49.775 [2024-11-21 01:54:33.587255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.587866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.587891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:49.775 [2024-11-21 01:54:33.587904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:28:49.775 [2024-11-21 01:54:33.587911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.646271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.646328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:49.775 [2024-11-21 01:54:33.646347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 58.343 ms 00:28:49.775 [2024-11-21 01:54:33.646356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.656785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:49.775 [2024-11-21 01:54:33.659284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.659319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:49.775 [2024-11-21 01:54:33.659330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.884 ms 00:28:49.775 [2024-11-21 01:54:33.659338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.659425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.659436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:49.775 [2024-11-21 01:54:33.659446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:49.775 [2024-11-21 01:54:33.659457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.660144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.660180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:49.775 [2024-11-21 01:54:33.660190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:28:49.775 [2024-11-21 01:54:33.660198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.660223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.660231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:49.775 [2024-11-21 01:54:33.660239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:49.775 [2024-11-21 01:54:33.660247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.660285] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:49.775 [2024-11-21 01:54:33.660295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.660304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:49.775 [2024-11-21 01:54:33.660312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:49.775 [2024-11-21 01:54:33.660320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.684922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.684967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:49.775 [2024-11-21 01:54:33.684980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.585 ms 00:28:49.775 [2024-11-21 01:54:33.684994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:49.775 [2024-11-21 01:54:33.685074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:49.775 [2024-11-21 01:54:33.685084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:49.775 [2024-11-21 01:54:33.685093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:49.775 [2024-11-21 01:54:33.685102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
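A side note on the layout dump earlier in this startup sequence: the MiB figures printed by ftl_layout_dump and the blk_offs/blk_sz values in the superblock metadata dump appear to be the same quantities in different units. A rough sketch of the conversion, assuming 4096-byte FTL blocks and that the type:0x2 entry corresponds to the l2p region (its offset and size line up with that region):
$ echo $(( 0x20 * 4096 )) bytes             # blk_offs:0x20 -> 131072 B, shown above as "offset: 0.12 MiB"
131072 bytes
$ echo $(( 0x5000 * 4096 / 1048576 )) MiB   # blk_sz:0x5000 -> the l2p region's "80.00 MiB"
80 MiB
$ echo $(( 20971520 * 4 / 1048576 )) MiB    # L2P entries x 4-byte L2P address size -> the same 80 MiB
80 MiB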
00:28:49.775 [2024-11-21 01:54:33.686289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.231 ms, result 0 00:28:51.162  [2024-11-21T01:54:36.066Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-21T01:54:37.055Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-21T01:54:37.999Z] Copying: 44/1024 [MB] (11 MBps) [2024-11-21T01:54:38.944Z] Copying: 58/1024 [MB] (14 MBps) [2024-11-21T01:54:39.889Z] Copying: 73/1024 [MB] (15 MBps) [2024-11-21T01:54:41.278Z] Copying: 96/1024 [MB] (22 MBps) [2024-11-21T01:54:42.222Z] Copying: 118/1024 [MB] (22 MBps) [2024-11-21T01:54:43.168Z] Copying: 137/1024 [MB] (18 MBps) [2024-11-21T01:54:44.114Z] Copying: 155/1024 [MB] (18 MBps) [2024-11-21T01:54:45.061Z] Copying: 173/1024 [MB] (17 MBps) [2024-11-21T01:54:46.007Z] Copying: 185/1024 [MB] (11 MBps) [2024-11-21T01:54:46.952Z] Copying: 198/1024 [MB] (13 MBps) [2024-11-21T01:54:47.897Z] Copying: 216/1024 [MB] (17 MBps) [2024-11-21T01:54:49.287Z] Copying: 231/1024 [MB] (15 MBps) [2024-11-21T01:54:50.231Z] Copying: 246/1024 [MB] (15 MBps) [2024-11-21T01:54:51.176Z] Copying: 264/1024 [MB] (18 MBps) [2024-11-21T01:54:52.120Z] Copying: 283/1024 [MB] (19 MBps) [2024-11-21T01:54:53.064Z] Copying: 295/1024 [MB] (11 MBps) [2024-11-21T01:54:54.010Z] Copying: 306/1024 [MB] (10 MBps) [2024-11-21T01:54:54.957Z] Copying: 327/1024 [MB] (21 MBps) [2024-11-21T01:54:55.902Z] Copying: 343/1024 [MB] (16 MBps) [2024-11-21T01:54:57.290Z] Copying: 369/1024 [MB] (26 MBps) [2024-11-21T01:54:58.235Z] Copying: 390/1024 [MB] (20 MBps) [2024-11-21T01:54:59.180Z] Copying: 403/1024 [MB] (12 MBps) [2024-11-21T01:55:00.125Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-21T01:55:01.071Z] Copying: 425/1024 [MB] (10 MBps) [2024-11-21T01:55:02.018Z] Copying: 436/1024 [MB] (11 MBps) [2024-11-21T01:55:02.961Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-21T01:55:03.905Z] Copying: 460/1024 [MB] (12 MBps) [2024-11-21T01:55:05.292Z] Copying: 476/1024 [MB] (15 MBps) [2024-11-21T01:55:05.909Z] Copying: 487/1024 [MB] (11 MBps) [2024-11-21T01:55:06.878Z] Copying: 500/1024 [MB] (13 MBps) [2024-11-21T01:55:08.266Z] Copying: 512/1024 [MB] (11 MBps) [2024-11-21T01:55:09.208Z] Copying: 539/1024 [MB] (27 MBps) [2024-11-21T01:55:10.151Z] Copying: 553/1024 [MB] (14 MBps) [2024-11-21T01:55:11.093Z] Copying: 570/1024 [MB] (16 MBps) [2024-11-21T01:55:12.037Z] Copying: 587/1024 [MB] (17 MBps) [2024-11-21T01:55:12.982Z] Copying: 604/1024 [MB] (17 MBps) [2024-11-21T01:55:13.925Z] Copying: 624/1024 [MB] (19 MBps) [2024-11-21T01:55:14.870Z] Copying: 635/1024 [MB] (11 MBps) [2024-11-21T01:55:16.255Z] Copying: 645/1024 [MB] (10 MBps) [2024-11-21T01:55:17.200Z] Copying: 656/1024 [MB] (10 MBps) [2024-11-21T01:55:18.143Z] Copying: 669/1024 [MB] (12 MBps) [2024-11-21T01:55:19.085Z] Copying: 682/1024 [MB] (13 MBps) [2024-11-21T01:55:20.028Z] Copying: 697/1024 [MB] (14 MBps) [2024-11-21T01:55:20.972Z] Copying: 711/1024 [MB] (13 MBps) [2024-11-21T01:55:21.916Z] Copying: 724/1024 [MB] (13 MBps) [2024-11-21T01:55:23.304Z] Copying: 738/1024 [MB] (13 MBps) [2024-11-21T01:55:23.879Z] Copying: 756/1024 [MB] (18 MBps) [2024-11-21T01:55:25.268Z] Copying: 774/1024 [MB] (18 MBps) [2024-11-21T01:55:26.212Z] Copying: 790/1024 [MB] (15 MBps) [2024-11-21T01:55:27.156Z] Copying: 821/1024 [MB] (31 MBps) [2024-11-21T01:55:28.101Z] Copying: 837/1024 [MB] (15 MBps) [2024-11-21T01:55:29.047Z] Copying: 853/1024 [MB] (16 MBps) [2024-11-21T01:55:29.992Z] Copying: 866/1024 [MB] (12 MBps) [2024-11-21T01:55:30.935Z] Copying: 879/1024 [MB] (13 MBps) 
[2024-11-21T01:55:31.879Z] Copying: 893/1024 [MB] (13 MBps) [2024-11-21T01:55:33.264Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-21T01:55:34.209Z] Copying: 923/1024 [MB] (17 MBps) [2024-11-21T01:55:35.207Z] Copying: 933/1024 [MB] (10 MBps) [2024-11-21T01:55:36.153Z] Copying: 944/1024 [MB] (10 MBps) [2024-11-21T01:55:37.097Z] Copying: 954/1024 [MB] (10 MBps) [2024-11-21T01:55:38.038Z] Copying: 965/1024 [MB] (10 MBps) [2024-11-21T01:55:38.983Z] Copying: 980/1024 [MB] (15 MBps) [2024-11-21T01:55:39.927Z] Copying: 991/1024 [MB] (11 MBps) [2024-11-21T01:55:40.871Z] Copying: 1005/1024 [MB] (13 MBps) [2024-11-21T01:55:41.443Z] Copying: 1019/1024 [MB] (14 MBps) [2024-11-21T01:55:41.443Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-21 01:55:41.281343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.281418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:57.486 [2024-11-21 01:55:41.281435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:57.486 [2024-11-21 01:55:41.281444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.281468] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:57.486 [2024-11-21 01:55:41.286974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.287030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:57.486 [2024-11-21 01:55:41.287056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.489 ms 00:29:57.486 [2024-11-21 01:55:41.287068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.287714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.287747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:57.486 [2024-11-21 01:55:41.287762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:29:57.486 [2024-11-21 01:55:41.287775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.293379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.293415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:57.486 [2024-11-21 01:55:41.293429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.584 ms 00:29:57.486 [2024-11-21 01:55:41.293446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.300123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.300166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:57.486 [2024-11-21 01:55:41.300177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:29:57.486 [2024-11-21 01:55:41.300185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.326468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.326518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:57.486 [2024-11-21 01:55:41.326531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.220 ms 00:29:57.486 [2024-11-21 01:55:41.326539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:57.486 [2024-11-21 01:55:41.342639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.342688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:57.486 [2024-11-21 01:55:41.342700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.054 ms 00:29:57.486 [2024-11-21 01:55:41.342709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.347537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.347586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:57.486 [2024-11-21 01:55:41.347596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:29:57.486 [2024-11-21 01:55:41.347605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.373151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.373199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:57.486 [2024-11-21 01:55:41.373211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.518 ms 00:29:57.486 [2024-11-21 01:55:41.373218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.398239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.398298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:57.486 [2024-11-21 01:55:41.398308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.977 ms 00:29:57.486 [2024-11-21 01:55:41.398315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.486 [2024-11-21 01:55:41.422423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.486 [2024-11-21 01:55:41.422470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:57.486 [2024-11-21 01:55:41.422482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.055 ms 00:29:57.486 [2024-11-21 01:55:41.422489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.748 [2024-11-21 01:55:41.446513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.748 [2024-11-21 01:55:41.446560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:57.748 [2024-11-21 01:55:41.446570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.953 ms 00:29:57.748 [2024-11-21 01:55:41.446577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.748 [2024-11-21 01:55:41.446629] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:57.748 [2024-11-21 01:55:41.446652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:57.748 [2024-11-21 01:55:41.446664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:57.748 [2024-11-21 01:55:41.446673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 
0 state: free 00:29:57.748 [2024-11-21 01:55:41.446698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:57.748 [2024-11-21 01:55:41.446754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 
/ 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.446999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447280] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:57.749 [2024-11-21 01:55:41.447449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:57.749 [2024-11-21 01:55:41.447460] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b817a20-1913-404f-8c83-c65c9c250ee0 00:29:57.749 [2024-11-21 01:55:41.447469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:57.749 [2024-11-21 01:55:41.447477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:57.749 [2024-11-21 01:55:41.447485] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 0 00:29:57.749 [2024-11-21 01:55:41.447495] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:57.749 [2024-11-21 01:55:41.447502] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:57.749 [2024-11-21 01:55:41.447510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:57.749 [2024-11-21 01:55:41.447524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:57.749 [2024-11-21 01:55:41.447531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:57.749 [2024-11-21 01:55:41.447537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:57.749 [2024-11-21 01:55:41.447546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.749 [2024-11-21 01:55:41.447554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:57.749 [2024-11-21 01:55:41.447563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:29:57.749 [2024-11-21 01:55:41.447573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.460817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.749 [2024-11-21 01:55:41.460862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:57.749 [2024-11-21 01:55:41.460874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.225 ms 00:29:57.749 [2024-11-21 01:55:41.460882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.461280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.749 [2024-11-21 01:55:41.461309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:57.749 [2024-11-21 01:55:41.461319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:29:57.749 [2024-11-21 01:55:41.461327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.497565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.497629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.749 [2024-11-21 01:55:41.497642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.497652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.497712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.497727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.749 [2024-11-21 01:55:41.497736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.497745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.497828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.497841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.749 [2024-11-21 01:55:41.497850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.497859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.497875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.497884] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.749 [2024-11-21 01:55:41.497897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.497905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.581157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.581217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.749 [2024-11-21 01:55:41.581230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.581239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.649531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.749 [2024-11-21 01:55:41.649549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.649558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.649649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.749 [2024-11-21 01:55:41.649659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.649668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.649734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.749 [2024-11-21 01:55:41.649744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.649756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.649871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.749 [2024-11-21 01:55:41.649880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.649888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.649930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:57.749 [2024-11-21 01:55:41.649940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.649947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.649992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:57.749 [2024-11-21 01:55:41.650002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.749 [2024-11-21 01:55:41.650011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.650019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.650064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:29:57.749 [2024-11-21 01:55:41.650075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.749 [2024-11-21 01:55:41.650084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:57.749 [2024-11-21 01:55:41.650094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.749 [2024-11-21 01:55:41.650231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.849 ms, result 0 00:29:58.693 00:29:58.693 00:29:58.693 01:55:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:00.608 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:00.608 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:00.608 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:00.608 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:00.608 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:00.608 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:00.869 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 80255 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80255 ']' 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 80255 00:30:00.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80255) - No such process 00:30:00.870 Process with pid 80255 is not found 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 80255 is not found' 00:30:00.870 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:01.131 Remove shared memory files 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:01.131 00:30:01.131 real 4m4.243s 00:30:01.131 user 4m28.429s 00:30:01.131 sys 0m26.869s 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:01.131 ************************************ 00:30:01.131 END TEST ftl_dirty_shutdown 00:30:01.131 ************************************ 00:30:01.131 01:55:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.131 01:55:44 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 
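The ftl_dirty_shutdown stage above finishes by verifying data integrity: "md5sum -c test/ftl/testfile2.md5" reports OK, meaning the file written before the dirty shutdown read back intact, after which the working files and target process are cleaned up and ftl_upgrade_shutdown is kicked off with the base and cache PCIe addresses as its two arguments (0000:00:11.0 ends up as FTL_BASE and 0000:00:10.0 as FTL_CACHE, as the exports further down show). Outside the CI harness that launch reduces to roughly the following sketch; running it directly is an assumption not shown in this log and would also require root privileges and hugepages configured:

    cd /home/vagrant/spdk_repo/spdk
    # first argument is the base device, second the NV cache device
    ./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0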
00:30:01.131 01:55:44 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:01.131 01:55:44 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:01.131 01:55:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:01.131 ************************************ 00:30:01.131 START TEST ftl_upgrade_shutdown 00:30:01.131 ************************************ 00:30:01.131 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:01.131 * Looking for test storage... 00:30:01.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.393 --rc genhtml_branch_coverage=1 00:30:01.393 --rc genhtml_function_coverage=1 00:30:01.393 --rc genhtml_legend=1 00:30:01.393 --rc geninfo_all_blocks=1 00:30:01.393 --rc geninfo_unexecuted_blocks=1 00:30:01.393 00:30:01.393 ' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.393 --rc genhtml_branch_coverage=1 00:30:01.393 --rc genhtml_function_coverage=1 00:30:01.393 --rc genhtml_legend=1 00:30:01.393 --rc geninfo_all_blocks=1 00:30:01.393 --rc geninfo_unexecuted_blocks=1 00:30:01.393 00:30:01.393 ' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.393 --rc genhtml_branch_coverage=1 00:30:01.393 --rc genhtml_function_coverage=1 00:30:01.393 --rc genhtml_legend=1 00:30:01.393 --rc geninfo_all_blocks=1 00:30:01.393 --rc geninfo_unexecuted_blocks=1 00:30:01.393 00:30:01.393 ' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:01.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.393 --rc genhtml_branch_coverage=1 00:30:01.393 --rc genhtml_function_coverage=1 00:30:01.393 --rc genhtml_legend=1 00:30:01.393 --rc geninfo_all_blocks=1 00:30:01.393 --rc geninfo_unexecuted_blocks=1 00:30:01.393 00:30:01.393 ' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:01.393 01:55:45 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82880 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82880 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82880 ']' 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.393 01:55:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.393 [2024-11-21 01:55:45.314374] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
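Stripped of the shell xtrace noise, the device bring-up that follows in this trace amounts to the rpc.py sequence below. This is a condensed sketch assembled from the commands visible in the log itself: the UUIDs are the ones reported by this particular run and will differ on every execution, and any pre-existing lvol store is removed first by clear_lvols before the new one is created.

    # base side: attach the 0000:00:11.0 controller (namespace bdev basen1), create lvol store "lvs" on it,
    # and carve out a 20480 MiB thin-provisioned volume (basen1p0) to act as the FTL base device
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8038d3ae-a187-4f2b-ba06-815002938da2
    # cache side: attach 0000:00:10.0 (cachen1) and split off one 5120 MiB chunk (cachen1p0) as the write-buffer cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
    # create the FTL bdev on top of the two, with FTL_L2P_DRAM_LIMIT=2 passed through
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 046e1e61-661b-429d-b606-9f97d2497ad2 -c cachen1p0 --l2p_dram_limit 2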
00:30:01.394 [2024-11-21 01:55:45.315457] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82880 ] 00:30:01.654 [2024-11-21 01:55:45.497999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.916 [2024-11-21 01:55:45.626540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:02.488 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:02.749 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:02.749 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:02.749 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:02.749 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:02.750 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:02.750 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:02.750 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:02.750 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:03.012 { 00:30:03.012 "name": "basen1", 00:30:03.012 "aliases": [ 00:30:03.012 "58e7b973-62fe-41c4-995c-9933a3b9e0e7" 00:30:03.012 ], 00:30:03.012 "product_name": "NVMe disk", 00:30:03.012 "block_size": 4096, 00:30:03.012 "num_blocks": 1310720, 00:30:03.012 "uuid": "58e7b973-62fe-41c4-995c-9933a3b9e0e7", 00:30:03.012 "numa_id": -1, 00:30:03.012 "assigned_rate_limits": { 00:30:03.012 "rw_ios_per_sec": 0, 00:30:03.012 "rw_mbytes_per_sec": 0, 00:30:03.012 "r_mbytes_per_sec": 0, 00:30:03.012 "w_mbytes_per_sec": 0 00:30:03.012 }, 00:30:03.012 "claimed": true, 00:30:03.012 "claim_type": "read_many_write_one", 00:30:03.012 "zoned": false, 00:30:03.012 "supported_io_types": { 00:30:03.012 "read": true, 00:30:03.012 "write": true, 00:30:03.012 "unmap": true, 00:30:03.012 "flush": true, 00:30:03.012 "reset": true, 00:30:03.012 "nvme_admin": true, 00:30:03.012 "nvme_io": true, 00:30:03.012 "nvme_io_md": false, 00:30:03.012 "write_zeroes": true, 00:30:03.012 "zcopy": false, 00:30:03.012 "get_zone_info": false, 00:30:03.012 "zone_management": false, 00:30:03.012 "zone_append": false, 00:30:03.012 "compare": true, 00:30:03.012 "compare_and_write": false, 00:30:03.012 "abort": true, 00:30:03.012 "seek_hole": false, 00:30:03.012 "seek_data": false, 00:30:03.012 "copy": true, 00:30:03.012 "nvme_iov_md": false 00:30:03.012 }, 00:30:03.012 "driver_specific": { 00:30:03.012 "nvme": [ 00:30:03.012 { 00:30:03.012 "pci_address": "0000:00:11.0", 00:30:03.012 "trid": { 00:30:03.012 "trtype": "PCIe", 00:30:03.012 "traddr": "0000:00:11.0" 00:30:03.012 }, 00:30:03.012 "ctrlr_data": { 00:30:03.012 "cntlid": 0, 00:30:03.012 "vendor_id": "0x1b36", 00:30:03.012 "model_number": "QEMU NVMe Ctrl", 00:30:03.012 "serial_number": "12341", 00:30:03.012 "firmware_revision": "8.0.0", 00:30:03.012 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:03.012 "oacs": { 00:30:03.012 "security": 0, 00:30:03.012 "format": 1, 00:30:03.012 "firmware": 0, 00:30:03.012 "ns_manage": 1 00:30:03.012 }, 00:30:03.012 "multi_ctrlr": false, 00:30:03.012 "ana_reporting": false 00:30:03.012 }, 00:30:03.012 "vs": { 00:30:03.012 "nvme_version": "1.4" 00:30:03.012 }, 00:30:03.012 "ns_data": { 00:30:03.012 "id": 1, 00:30:03.012 "can_share": false 00:30:03.012 } 00:30:03.012 } 00:30:03.012 ], 00:30:03.012 "mp_policy": "active_passive" 00:30:03.012 } 00:30:03.012 } 00:30:03.012 ]' 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:03.012 01:55:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:03.273 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5dc85a8b-0ef0-4ec0-9e96-3da7455f2152 00:30:03.273 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:03.273 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5dc85a8b-0ef0-4ec0-9e96-3da7455f2152 00:30:03.535 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:03.796 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=8038d3ae-a187-4f2b-ba06-815002938da2 00:30:03.796 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8038d3ae-a187-4f2b-ba06-815002938da2 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=046e1e61-661b-429d-b606-9f97d2497ad2 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 046e1e61-661b-429d-b606-9f97d2497ad2 ]] 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 046e1e61-661b-429d-b606-9f97d2497ad2 5120 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=046e1e61-661b-429d-b606-9f97d2497ad2 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 046e1e61-661b-429d-b606-9f97d2497ad2 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=046e1e61-661b-429d-b606-9f97d2497ad2 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:04.058 01:55:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 046e1e61-661b-429d-b606-9f97d2497ad2 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:04.319 { 00:30:04.319 "name": "046e1e61-661b-429d-b606-9f97d2497ad2", 00:30:04.319 "aliases": [ 00:30:04.319 "lvs/basen1p0" 00:30:04.319 ], 00:30:04.319 "product_name": "Logical Volume", 00:30:04.319 "block_size": 4096, 00:30:04.319 "num_blocks": 5242880, 00:30:04.319 "uuid": "046e1e61-661b-429d-b606-9f97d2497ad2", 00:30:04.319 "assigned_rate_limits": { 00:30:04.319 "rw_ios_per_sec": 0, 00:30:04.319 "rw_mbytes_per_sec": 0, 00:30:04.319 "r_mbytes_per_sec": 0, 00:30:04.319 "w_mbytes_per_sec": 0 00:30:04.319 }, 00:30:04.319 "claimed": false, 00:30:04.319 "zoned": false, 00:30:04.319 "supported_io_types": { 00:30:04.319 "read": true, 00:30:04.319 "write": true, 00:30:04.319 "unmap": true, 00:30:04.319 "flush": false, 00:30:04.319 "reset": true, 00:30:04.319 "nvme_admin": false, 00:30:04.319 "nvme_io": false, 00:30:04.319 "nvme_io_md": false, 00:30:04.319 "write_zeroes": 
true, 00:30:04.319 "zcopy": false, 00:30:04.319 "get_zone_info": false, 00:30:04.319 "zone_management": false, 00:30:04.319 "zone_append": false, 00:30:04.319 "compare": false, 00:30:04.319 "compare_and_write": false, 00:30:04.319 "abort": false, 00:30:04.319 "seek_hole": true, 00:30:04.319 "seek_data": true, 00:30:04.319 "copy": false, 00:30:04.319 "nvme_iov_md": false 00:30:04.319 }, 00:30:04.319 "driver_specific": { 00:30:04.319 "lvol": { 00:30:04.319 "lvol_store_uuid": "8038d3ae-a187-4f2b-ba06-815002938da2", 00:30:04.319 "base_bdev": "basen1", 00:30:04.319 "thin_provision": true, 00:30:04.319 "num_allocated_clusters": 0, 00:30:04.319 "snapshot": false, 00:30:04.319 "clone": false, 00:30:04.319 "esnap_clone": false 00:30:04.319 } 00:30:04.319 } 00:30:04.319 } 00:30:04.319 ]' 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:04.319 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:04.581 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:04.581 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:04.581 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:04.843 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:04.843 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:04.843 01:55:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 046e1e61-661b-429d-b606-9f97d2497ad2 -c cachen1p0 --l2p_dram_limit 2 00:30:04.843 [2024-11-21 01:55:48.751562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.751741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:04.843 [2024-11-21 01:55:48.751761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.843 [2024-11-21 01:55:48.751768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.751827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.751835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:04.843 [2024-11-21 01:55:48.751843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:04.843 [2024-11-21 01:55:48.751849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.751867] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:04.843 [2024-11-21 
01:55:48.752433] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:04.843 [2024-11-21 01:55:48.752449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.752456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:04.843 [2024-11-21 01:55:48.752464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:30:04.843 [2024-11-21 01:55:48.752470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.752526] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 1d88891f-7bbe-4830-9ae6-f803e7acd276 00:30:04.843 [2024-11-21 01:55:48.753580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.753600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:04.843 [2024-11-21 01:55:48.753608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:04.843 [2024-11-21 01:55:48.753627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.758846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.758971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:04.843 [2024-11-21 01:55:48.758984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.181 ms 00:30:04.843 [2024-11-21 01:55:48.758991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.759023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.759032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:04.843 [2024-11-21 01:55:48.759039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:04.843 [2024-11-21 01:55:48.759050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.759081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.759089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:04.843 [2024-11-21 01:55:48.759098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:04.843 [2024-11-21 01:55:48.759105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.759123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:04.843 [2024-11-21 01:55:48.762131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.762234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:04.843 [2024-11-21 01:55:48.762251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.011 ms 00:30:04.843 [2024-11-21 01:55:48.762257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.762282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.843 [2024-11-21 01:55:48.762288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:04.843 [2024-11-21 01:55:48.762296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.843 [2024-11-21 01:55:48.762302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:04.843 [2024-11-21 01:55:48.762316] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:04.843 [2024-11-21 01:55:48.762428] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:04.843 [2024-11-21 01:55:48.762441] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:04.843 [2024-11-21 01:55:48.762449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:04.843 [2024-11-21 01:55:48.762458] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:04.843 [2024-11-21 01:55:48.762465] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762473] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:04.844 [2024-11-21 01:55:48.762480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:04.844 [2024-11-21 01:55:48.762488] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:04.844 [2024-11-21 01:55:48.762493] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:04.844 [2024-11-21 01:55:48.762501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.844 [2024-11-21 01:55:48.762506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:04.844 [2024-11-21 01:55:48.762514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:30:04.844 [2024-11-21 01:55:48.762519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.844 [2024-11-21 01:55:48.762584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.844 [2024-11-21 01:55:48.762590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:04.844 [2024-11-21 01:55:48.762598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:04.844 [2024-11-21 01:55:48.762608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.844 [2024-11-21 01:55:48.762709] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:04.844 [2024-11-21 01:55:48.762716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:04.844 [2024-11-21 01:55:48.762724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:04.844 [2024-11-21 01:55:48.762742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:04.844 [2024-11-21 01:55:48.762754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:04.844 [2024-11-21 01:55:48.762760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:04.844 [2024-11-21 01:55:48.762765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:04.844 [2024-11-21 01:55:48.762777] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:04.844 [2024-11-21 01:55:48.762784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:04.844 [2024-11-21 01:55:48.762797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:04.844 [2024-11-21 01:55:48.762801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:04.844 [2024-11-21 01:55:48.762815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:04.844 [2024-11-21 01:55:48.762823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:04.844 [2024-11-21 01:55:48.762835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:04.844 [2024-11-21 01:55:48.762841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:04.844 [2024-11-21 01:55:48.762852] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:04.844 [2024-11-21 01:55:48.762859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:04.844 [2024-11-21 01:55:48.762870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:04.844 [2024-11-21 01:55:48.762875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:04.844 [2024-11-21 01:55:48.762886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:04.844 [2024-11-21 01:55:48.762892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:04.844 [2024-11-21 01:55:48.762905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:04.844 [2024-11-21 01:55:48.762910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:04.844 [2024-11-21 01:55:48.762921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:04.844 [2024-11-21 01:55:48.762939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:04.844 [2024-11-21 01:55:48.762955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:04.844 [2024-11-21 01:55:48.762961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762966] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:04.844 [2024-11-21 01:55:48.762973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:04.844 [2024-11-21 01:55:48.762979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:04.844 [2024-11-21 01:55:48.762986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:04.844 [2024-11-21 01:55:48.762992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:04.844 [2024-11-21 01:55:48.763000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:04.844 [2024-11-21 01:55:48.763005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:04.844 [2024-11-21 01:55:48.763012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:04.844 [2024-11-21 01:55:48.763017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:04.844 [2024-11-21 01:55:48.763024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:04.844 [2024-11-21 01:55:48.763033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:04.844 [2024-11-21 01:55:48.763043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:04.844 [2024-11-21 01:55:48.763056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:04.844 [2024-11-21 01:55:48.763075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:04.844 [2024-11-21 01:55:48.763081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:04.844 [2024-11-21 01:55:48.763087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:04.844 [2024-11-21 01:55:48.763093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:04.844 [2024-11-21 01:55:48.763142] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:04.844 [2024-11-21 01:55:48.763150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:04.844 [2024-11-21 01:55:48.763163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:04.844 [2024-11-21 01:55:48.763169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:04.844 [2024-11-21 01:55:48.763175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:04.844 [2024-11-21 01:55:48.763181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.844 [2024-11-21 01:55:48.763189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:04.844 [2024-11-21 01:55:48.763195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:30:04.844 [2024-11-21 01:55:48.763202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.844 [2024-11-21 01:55:48.763246] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
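The hex block counts in the SB metadata layout above line up with the MiB figures in the region dump, assuming the default 4 KiB FTL block size (the block size itself is not printed in this log). A quick cross-check with bash arithmetic and bc, not part of the captured test run:
  $ echo "scale=2; $((0xe80)) * 4096 / 1048576" | bc      # l2p region: 14.50 MiB
  $ echo "scale=2; $((0x800)) * 4096 / 1048576" | bc      # each p2l checkpoint region: 8.00 MiB
  $ echo "scale=2; $((0x20)) * 4096 / 1048576" | bc       # sb / band_md / trim regions: 0.12 MiB
  $ echo "scale=2; $((0x480000)) * 4096 / 1048576" | bc   # data_btm on the base device: 18432.00 MiB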
00:30:04.844 [2024-11-21 01:55:48.763257] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:09.054 [2024-11-21 01:55:52.159036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.159126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:09.054 [2024-11-21 01:55:52.159146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3395.773 ms 00:30:09.054 [2024-11-21 01:55:52.159158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.190337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.190413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:09.054 [2024-11-21 01:55:52.190428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.934 ms 00:30:09.054 [2024-11-21 01:55:52.190439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.190526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.190540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:09.054 [2024-11-21 01:55:52.190549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:09.054 [2024-11-21 01:55:52.190565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.225825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.225873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:09.054 [2024-11-21 01:55:52.225886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.174 ms 00:30:09.054 [2024-11-21 01:55:52.225897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.225934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.225945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:09.054 [2024-11-21 01:55:52.225955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:09.054 [2024-11-21 01:55:52.225965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.226578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.226606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:09.054 [2024-11-21 01:55:52.226652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.561 ms 00:30:09.054 [2024-11-21 01:55:52.226664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.226719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.226732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:09.054 [2024-11-21 01:55:52.226742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:09.054 [2024-11-21 01:55:52.226755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.243988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.244037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:09.054 [2024-11-21 01:55:52.244049] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.213 ms 00:30:09.054 [2024-11-21 01:55:52.244059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.257064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:09.054 [2024-11-21 01:55:52.258601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.258659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:09.054 [2024-11-21 01:55:52.258674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.455 ms 00:30:09.054 [2024-11-21 01:55:52.258682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.295189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.295242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:09.054 [2024-11-21 01:55:52.295261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.469 ms 00:30:09.054 [2024-11-21 01:55:52.295270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.295377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.295392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:09.054 [2024-11-21 01:55:52.295407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:09.054 [2024-11-21 01:55:52.295416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.320713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.320897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:09.054 [2024-11-21 01:55:52.320926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.222 ms 00:30:09.054 [2024-11-21 01:55:52.320935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.346347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.346398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:09.054 [2024-11-21 01:55:52.346414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.358 ms 00:30:09.054 [2024-11-21 01:55:52.346421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.347033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.347059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:09.054 [2024-11-21 01:55:52.347072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:30:09.054 [2024-11-21 01:55:52.347083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.432503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.432704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:09.054 [2024-11-21 01:55:52.432736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 85.371 ms 00:30:09.054 [2024-11-21 01:55:52.432746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.459778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:09.054 [2024-11-21 01:55:52.459964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:09.054 [2024-11-21 01:55:52.460000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.907 ms 00:30:09.054 [2024-11-21 01:55:52.460010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.485832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.485880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:09.054 [2024-11-21 01:55:52.485894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.774 ms 00:30:09.054 [2024-11-21 01:55:52.485902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.511778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.511827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:09.054 [2024-11-21 01:55:52.511842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.821 ms 00:30:09.054 [2024-11-21 01:55:52.511849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.511903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.511912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:09.054 [2024-11-21 01:55:52.511927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:09.054 [2024-11-21 01:55:52.511935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.512028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.054 [2024-11-21 01:55:52.512042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:09.054 [2024-11-21 01:55:52.512053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:09.054 [2024-11-21 01:55:52.512061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.054 [2024-11-21 01:55:52.513216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3761.156 ms, result 0 00:30:09.054 { 00:30:09.054 "name": "ftl", 00:30:09.054 "uuid": "1d88891f-7bbe-4830-9ae6-f803e7acd276" 00:30:09.054 } 00:30:09.054 01:55:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:09.054 [2024-11-21 01:55:52.736508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.054 01:55:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:09.054 01:55:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:09.316 [2024-11-21 01:55:53.164996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:09.316 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:09.577 [2024-11-21 01:55:53.374511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:09.577 01:55:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:09.839 Fill FTL, iteration 1 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83007 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83007 /var/tmp/spdk.tgt.sock 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83007 ']' 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.839 01:55:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:10.100 [2024-11-21 01:55:53.822729] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:10.101 [2024-11-21 01:55:53.823164] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83007 ] 00:30:10.101 [2024-11-21 01:55:53.983766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.361 [2024-11-21 01:55:54.103960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.935 01:55:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.935 01:55:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:10.935 01:55:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:11.196 ftln1 00:30:11.196 01:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:11.196 01:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83007 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83007 ']' 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83007 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83007 00:30:11.459 killing process with pid 83007 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83007' 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83007 00:30:11.459 01:55:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83007 00:30:13.376 01:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:13.376 01:55:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:13.376 [2024-11-21 01:55:56.931369] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:13.376 [2024-11-21 01:55:56.931484] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83055 ] 00:30:13.376 [2024-11-21 01:55:57.089538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.376 [2024-11-21 01:55:57.193892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.765  [2024-11-21T01:55:59.661Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-21T01:56:00.597Z] Copying: 377/1024 [MB] (187 MBps) [2024-11-21T01:56:01.971Z] Copying: 621/1024 [MB] (244 MBps) [2024-11-21T01:56:02.538Z] Copying: 865/1024 [MB] (244 MBps) [2024-11-21T01:56:03.109Z] Copying: 1024/1024 [MB] (average 217 MBps) 00:30:19.152 00:30:19.152 Calculate MD5 checksum, iteration 1 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:19.152 01:56:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:19.152 [2024-11-21 01:56:03.009428] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:19.152 [2024-11-21 01:56:03.009583] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83119 ] 00:30:19.414 [2024-11-21 01:56:03.172049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.414 [2024-11-21 01:56:03.268658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:20.870  [2024-11-21T01:56:05.399Z] Copying: 620/1024 [MB] (620 MBps) [2024-11-21T01:56:05.971Z] Copying: 1024/1024 [MB] (average 573 MBps) 00:30:22.014 00:30:22.014 01:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:22.014 01:56:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:24.560 Fill FTL, iteration 2 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4533b5e0d37e9897e0dd1d5ed5c64a24 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:24.560 01:56:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:24.560 [2024-11-21 01:56:08.208216] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:24.560 [2024-11-21 01:56:08.208487] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83175 ] 00:30:24.560 [2024-11-21 01:56:08.363973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.560 [2024-11-21 01:56:08.438263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.944  [2024-11-21T01:56:10.843Z] Copying: 254/1024 [MB] (254 MBps) [2024-11-21T01:56:11.788Z] Copying: 518/1024 [MB] (264 MBps) [2024-11-21T01:56:12.728Z] Copying: 773/1024 [MB] (255 MBps) [2024-11-21T01:56:13.301Z] Copying: 1024/1024 [MB] (average 258 MBps) 00:30:29.344 00:30:29.344 Calculate MD5 checksum, iteration 2 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:29.344 01:56:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:29.603 [2024-11-21 01:56:13.326882] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:29.603 [2024-11-21 01:56:13.327161] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83228 ] 00:30:29.603 [2024-11-21 01:56:13.483084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.861 [2024-11-21 01:56:13.561269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.246  [2024-11-21T01:56:15.773Z] Copying: 658/1024 [MB] (658 MBps) [2024-11-21T01:56:16.708Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:30:32.751 00:30:32.751 01:56:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:32.751 01:56:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b193e250d738028cf085de6fff73e734 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:34.666 [2024-11-21 01:56:18.497662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.666 [2024-11-21 01:56:18.497712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:34.666 [2024-11-21 01:56:18.497724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:34.666 [2024-11-21 01:56:18.497731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.666 [2024-11-21 01:56:18.497750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.666 [2024-11-21 01:56:18.497757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:34.666 [2024-11-21 01:56:18.497767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:34.666 [2024-11-21 01:56:18.497772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.666 [2024-11-21 01:56:18.497789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.666 [2024-11-21 01:56:18.497796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:34.666 [2024-11-21 01:56:18.497802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:34.666 [2024-11-21 01:56:18.497808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.666 [2024-11-21 01:56:18.497858] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.186 ms, result 0 00:30:34.666 true 00:30:34.666 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:34.924 { 00:30:34.924 "name": "ftl", 00:30:34.924 "properties": [ 00:30:34.924 { 00:30:34.924 "name": "superblock_version", 00:30:34.924 "value": 5, 00:30:34.924 "read-only": true 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "name": "base_device", 00:30:34.924 "bands": [ 00:30:34.924 { 00:30:34.924 "id": 0, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 
00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 1, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 2, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 3, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 4, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 5, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 6, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 7, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 8, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 9, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 10, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 11, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 12, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 13, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 14, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 15, 00:30:34.924 "state": "FREE", 00:30:34.924 "validity": 0.0 00:30:34.924 }, 00:30:34.924 { 00:30:34.924 "id": 16, 00:30:34.925 "state": "FREE", 00:30:34.925 "validity": 0.0 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "id": 17, 00:30:34.925 "state": "FREE", 00:30:34.925 "validity": 0.0 00:30:34.925 } 00:30:34.925 ], 00:30:34.925 "read-only": true 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "name": "cache_device", 00:30:34.925 "type": "bdev", 00:30:34.925 "chunks": [ 00:30:34.925 { 00:30:34.925 "id": 0, 00:30:34.925 "state": "INACTIVE", 00:30:34.925 "utilization": 0.0 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "id": 1, 00:30:34.925 "state": "CLOSED", 00:30:34.925 "utilization": 1.0 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "id": 2, 00:30:34.925 "state": "CLOSED", 00:30:34.925 "utilization": 1.0 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "id": 3, 00:30:34.925 "state": "OPEN", 00:30:34.925 "utilization": 0.001953125 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "id": 4, 00:30:34.925 "state": "OPEN", 00:30:34.925 "utilization": 0.0 00:30:34.925 } 00:30:34.925 ], 00:30:34.925 "read-only": true 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "name": "verbose_mode", 00:30:34.925 "value": true, 00:30:34.925 "unit": "", 00:30:34.925 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:34.925 }, 00:30:34.925 { 00:30:34.925 "name": "prep_upgrade_on_shutdown", 00:30:34.925 "value": false, 00:30:34.925 "unit": "", 00:30:34.925 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:34.925 } 00:30:34.925 ] 00:30:34.925 } 00:30:34.925 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:35.183 [2024-11-21 01:56:18.901960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:35.183 [2024-11-21 01:56:18.901997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:35.183 [2024-11-21 01:56:18.902006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:35.183 [2024-11-21 01:56:18.902012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.183 [2024-11-21 01:56:18.902029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.183 [2024-11-21 01:56:18.902035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:35.183 [2024-11-21 01:56:18.902041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.183 [2024-11-21 01:56:18.902047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.183 [2024-11-21 01:56:18.902062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.183 [2024-11-21 01:56:18.902069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:35.183 [2024-11-21 01:56:18.902075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.183 [2024-11-21 01:56:18.902080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.183 [2024-11-21 01:56:18.902124] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.154 ms, result 0 00:30:35.183 true 00:30:35.183 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:35.183 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:35.183 01:56:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:35.183 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:35.184 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:35.184 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:35.442 [2024-11-21 01:56:19.234228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.442 [2024-11-21 01:56:19.234355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:35.442 [2024-11-21 01:56:19.234459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:35.442 [2024-11-21 01:56:19.234477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.442 [2024-11-21 01:56:19.234506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.442 [2024-11-21 01:56:19.234522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:35.442 [2024-11-21 01:56:19.234537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:35.442 [2024-11-21 01:56:19.234550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:35.442 [2024-11-21 01:56:19.234573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:35.442 [2024-11-21 01:56:19.234696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:35.442 [2024-11-21 01:56:19.234712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:35.442 [2024-11-21 01:56:19.234726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:35.442 [2024-11-21 01:56:19.234777] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.534 ms, result 0 00:30:35.442 true 00:30:35.442 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:35.700 { 00:30:35.700 "name": "ftl", 00:30:35.700 "properties": [ 00:30:35.700 { 00:30:35.700 "name": "superblock_version", 00:30:35.700 "value": 5, 00:30:35.700 "read-only": true 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "name": "base_device", 00:30:35.700 "bands": [ 00:30:35.700 { 00:30:35.700 "id": 0, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 1, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 2, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 3, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 4, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 5, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 6, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 7, 00:30:35.700 "state": "FREE", 00:30:35.700 "validity": 0.0 00:30:35.700 }, 00:30:35.700 { 00:30:35.700 "id": 8, 00:30:35.700 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 9, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 10, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 11, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 12, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 13, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 14, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 15, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 16, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 17, 00:30:35.701 "state": "FREE", 00:30:35.701 "validity": 0.0 00:30:35.701 } 00:30:35.701 ], 00:30:35.701 "read-only": true 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "name": "cache_device", 00:30:35.701 "type": "bdev", 00:30:35.701 "chunks": [ 00:30:35.701 { 00:30:35.701 "id": 0, 00:30:35.701 "state": "INACTIVE", 00:30:35.701 "utilization": 0.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 1, 00:30:35.701 "state": "CLOSED", 00:30:35.701 "utilization": 1.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 2, 00:30:35.701 "state": "CLOSED", 00:30:35.701 "utilization": 1.0 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 3, 00:30:35.701 "state": "OPEN", 00:30:35.701 "utilization": 0.001953125 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "id": 4, 00:30:35.701 "state": "OPEN", 00:30:35.701 "utilization": 0.0 00:30:35.701 } 00:30:35.701 ], 00:30:35.701 "read-only": true 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "name": "verbose_mode", 
00:30:35.701 "value": true, 00:30:35.701 "unit": "", 00:30:35.701 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:35.701 }, 00:30:35.701 { 00:30:35.701 "name": "prep_upgrade_on_shutdown", 00:30:35.701 "value": true, 00:30:35.701 "unit": "", 00:30:35.701 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:35.701 } 00:30:35.701 ] 00:30:35.701 } 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82880 ]] 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82880 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82880 ']' 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82880 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82880 00:30:35.701 killing process with pid 82880 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82880' 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82880 00:30:35.701 01:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82880 00:30:36.269 [2024-11-21 01:56:20.050421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:36.269 [2024-11-21 01:56:20.060987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.269 [2024-11-21 01:56:20.061023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:36.269 [2024-11-21 01:56:20.061035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:36.269 [2024-11-21 01:56:20.061042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.269 [2024-11-21 01:56:20.061061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:36.269 [2024-11-21 01:56:20.063307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.269 [2024-11-21 01:56:20.063331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:36.269 [2024-11-21 01:56:20.063340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.234 ms 00:30:36.269 [2024-11-21 01:56:20.063347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.605989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.606044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:44.413 [2024-11-21 01:56:27.606056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7542.589 ms 00:30:44.413 [2024-11-21 01:56:27.606145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.607477] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.607502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:44.413 [2024-11-21 01:56:27.607509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.320 ms 00:30:44.413 [2024-11-21 01:56:27.607516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.608412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.608429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:44.413 [2024-11-21 01:56:27.608437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:30:44.413 [2024-11-21 01:56:27.608443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.616057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.616082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:44.413 [2024-11-21 01:56:27.616091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.582 ms 00:30:44.413 [2024-11-21 01:56:27.616097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.620894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.620920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:44.413 [2024-11-21 01:56:27.620928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.771 ms 00:30:44.413 [2024-11-21 01:56:27.620935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.620990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.620997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:44.413 [2024-11-21 01:56:27.621004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:44.413 [2024-11-21 01:56:27.621014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.628244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.628277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:44.413 [2024-11-21 01:56:27.628284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.218 ms 00:30:44.413 [2024-11-21 01:56:27.628289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.635528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.635657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:44.413 [2024-11-21 01:56:27.635669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.216 ms 00:30:44.413 [2024-11-21 01:56:27.635674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.642858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.642940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:44.413 [2024-11-21 01:56:27.642986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.160 ms 00:30:44.413 [2024-11-21 01:56:27.643004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.649942] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.650029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:44.413 [2024-11-21 01:56:27.650075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.878 ms 00:30:44.413 [2024-11-21 01:56:27.650091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.650120] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:44.413 [2024-11-21 01:56:27.650140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:44.413 [2024-11-21 01:56:27.650197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:44.413 [2024-11-21 01:56:27.650227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:44.413 [2024-11-21 01:56:27.650249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:44.413 [2024-11-21 01:56:27.650758] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:44.413 [2024-11-21 01:56:27.650773] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1d88891f-7bbe-4830-9ae6-f803e7acd276 00:30:44.413 [2024-11-21 01:56:27.650820] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:44.413 [2024-11-21 01:56:27.650836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:44.413 [2024-11-21 01:56:27.650849] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:44.413 [2024-11-21 01:56:27.650864] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:44.413 [2024-11-21 01:56:27.650878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:44.413 [2024-11-21 01:56:27.650893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:44.413 [2024-11-21 01:56:27.650910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:44.413 [2024-11-21 01:56:27.650923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:44.413 [2024-11-21 01:56:27.650937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:44.413 [2024-11-21 01:56:27.650975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.650995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:44.413 [2024-11-21 01:56:27.651010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:30:44.413 [2024-11-21 01:56:27.651023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.660350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.660432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:44.413 [2024-11-21 01:56:27.660471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.303 ms 00:30:44.413 [2024-11-21 01:56:27.660492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.660778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.413 [2024-11-21 01:56:27.660830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:44.413 [2024-11-21 01:56:27.660868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:30:44.413 [2024-11-21 01:56:27.660884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.693598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.413 [2024-11-21 01:56:27.693701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:44.413 [2024-11-21 01:56:27.693741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.413 [2024-11-21 01:56:27.693764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.413 [2024-11-21 01:56:27.693795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.413 [2024-11-21 01:56:27.693811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:44.413 [2024-11-21 01:56:27.693826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.693840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.693934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.693956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:44.414 [2024-11-21 01:56:27.693971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.693985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.694010] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.694075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:44.414 [2024-11-21 01:56:27.694090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.694104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.752167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.752277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:44.414 [2024-11-21 01:56:27.752338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.752360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.800809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.800928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:44.414 [2024-11-21 01:56:27.800967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.800990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.801061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:44.414 [2024-11-21 01:56:27.801137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.801215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:44.414 [2024-11-21 01:56:27.801275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.801370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:44.414 [2024-11-21 01:56:27.801429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.801503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:44.414 [2024-11-21 01:56:27.801562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.801624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:44.414 [2024-11-21 01:56:27.801751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 
[2024-11-21 01:56:27.801808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:44.414 [2024-11-21 01:56:27.801830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:44.414 [2024-11-21 01:56:27.801845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:44.414 [2024-11-21 01:56:27.801859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.414 [2024-11-21 01:56:27.802003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7740.976 ms, result 0 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83404 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83404 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83404 ']' 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:47.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.716 01:56:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:47.716 [2024-11-21 01:56:31.562399] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
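With the previous FTL instance shut down cleanly (result 0 above), ftl/upgrade_shutdown.sh@75 calls tcp_target_setup: spdk_tgt is started pinned to core 0 from the saved tgt.json and the script then waits for the RPC socket. A minimal sketch of that step follows; the polling loop is an assumption standing in for the harness's waitforlisten helper, not its exact implementation:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    # block until the target answers on its default RPC socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done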
00:30:47.716 [2024-11-21 01:56:31.562516] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83404 ] 00:30:47.977 [2024-11-21 01:56:31.719774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.977 [2024-11-21 01:56:31.798113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.549 [2024-11-21 01:56:32.363739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:48.549 [2024-11-21 01:56:32.363790] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:48.812 [2024-11-21 01:56:32.511305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.511351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:48.812 [2024-11-21 01:56:32.511365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:48.812 [2024-11-21 01:56:32.511373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.511423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.511434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:48.812 [2024-11-21 01:56:32.511442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:48.812 [2024-11-21 01:56:32.511450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.511472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:48.812 [2024-11-21 01:56:32.512226] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:48.812 [2024-11-21 01:56:32.512249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.512257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:48.812 [2024-11-21 01:56:32.512265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.785 ms 00:30:48.812 [2024-11-21 01:56:32.512273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.513438] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:48.812 [2024-11-21 01:56:32.526341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.526383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:48.812 [2024-11-21 01:56:32.526400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.904 ms 00:30:48.812 [2024-11-21 01:56:32.526407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.526467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.526476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:48.812 [2024-11-21 01:56:32.526484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:48.812 [2024-11-21 01:56:32.526491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.531699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 
01:56:32.531731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:48.812 [2024-11-21 01:56:32.531740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.154 ms 00:30:48.812 [2024-11-21 01:56:32.531747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.531800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.531809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:48.812 [2024-11-21 01:56:32.531817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:48.812 [2024-11-21 01:56:32.531824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.531877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.531888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:48.812 [2024-11-21 01:56:32.531898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:48.812 [2024-11-21 01:56:32.531906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.531927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:48.812 [2024-11-21 01:56:32.535216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.535245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:48.812 [2024-11-21 01:56:32.535254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.293 ms 00:30:48.812 [2024-11-21 01:56:32.535264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.535288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.535296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:48.812 [2024-11-21 01:56:32.535304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:48.812 [2024-11-21 01:56:32.535311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.535332] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:48.812 [2024-11-21 01:56:32.535350] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:48.812 [2024-11-21 01:56:32.535385] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:48.812 [2024-11-21 01:56:32.535399] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:48.812 [2024-11-21 01:56:32.535500] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:48.812 [2024-11-21 01:56:32.535510] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:48.812 [2024-11-21 01:56:32.535521] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:48.812 [2024-11-21 01:56:32.535530] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:48.812 [2024-11-21 01:56:32.535539] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:48.812 [2024-11-21 01:56:32.535549] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:48.812 [2024-11-21 01:56:32.535557] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:48.812 [2024-11-21 01:56:32.535564] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:48.812 [2024-11-21 01:56:32.535571] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:48.812 [2024-11-21 01:56:32.535578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.535585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:48.812 [2024-11-21 01:56:32.535593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:30:48.812 [2024-11-21 01:56:32.535600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.812 [2024-11-21 01:56:32.535707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.812 [2024-11-21 01:56:32.535716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:48.813 [2024-11-21 01:56:32.535724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:48.813 [2024-11-21 01:56:32.535733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.813 [2024-11-21 01:56:32.535841] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:48.813 [2024-11-21 01:56:32.535857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:48.813 [2024-11-21 01:56:32.535865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:48.813 [2024-11-21 01:56:32.535873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.535880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:48.813 [2024-11-21 01:56:32.535892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.535900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:48.813 [2024-11-21 01:56:32.535907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:48.813 [2024-11-21 01:56:32.535918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:48.813 [2024-11-21 01:56:32.535925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.535935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:48.813 [2024-11-21 01:56:32.535947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:48.813 [2024-11-21 01:56:32.535957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.535963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:48.813 [2024-11-21 01:56:32.535975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:48.813 [2024-11-21 01:56:32.535981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.535988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:48.813 [2024-11-21 01:56:32.535997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:48.813 [2024-11-21 01:56:32.536004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536011] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:48.813 [2024-11-21 01:56:32.536018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:48.813 [2024-11-21 01:56:32.536037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:48.813 [2024-11-21 01:56:32.536062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:48.813 [2024-11-21 01:56:32.536081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:48.813 [2024-11-21 01:56:32.536100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:48.813 [2024-11-21 01:56:32.536120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:48.813 [2024-11-21 01:56:32.536139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:48.813 [2024-11-21 01:56:32.536158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:48.813 [2024-11-21 01:56:32.536164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536171] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:48.813 [2024-11-21 01:56:32.536180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:48.813 [2024-11-21 01:56:32.536187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:48.813 [2024-11-21 01:56:32.536203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:48.813 [2024-11-21 01:56:32.536210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:48.813 [2024-11-21 01:56:32.536216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:48.813 [2024-11-21 01:56:32.536223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:48.813 [2024-11-21 01:56:32.536230] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:48.813 [2024-11-21 01:56:32.536236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:48.813 [2024-11-21 01:56:32.536248] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:48.813 [2024-11-21 01:56:32.536257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:48.813 [2024-11-21 01:56:32.536272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:48.813 [2024-11-21 01:56:32.536297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:48.813 [2024-11-21 01:56:32.536304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:48.813 [2024-11-21 01:56:32.536311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:48.813 [2024-11-21 01:56:32.536318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:48.813 [2024-11-21 01:56:32.536365] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:48.813 [2024-11-21 01:56:32.536373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:48.813 [2024-11-21 01:56:32.536388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:48.813 [2024-11-21 01:56:32.536395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:48.813 [2024-11-21 01:56:32.536402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:48.813 [2024-11-21 01:56:32.536409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:48.813 [2024-11-21 01:56:32.536417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:48.813 [2024-11-21 01:56:32.536424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.644 ms 00:30:48.813 [2024-11-21 01:56:32.536431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:48.813 [2024-11-21 01:56:32.536483] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:48.813 [2024-11-21 01:56:32.536493] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:53.025 [2024-11-21 01:56:36.452496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.452867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:53.026 [2024-11-21 01:56:36.452958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3915.996 ms 00:30:53.026 [2024-11-21 01:56:36.452986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.484503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.484758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:53.026 [2024-11-21 01:56:36.484858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.151 ms 00:30:53.026 [2024-11-21 01:56:36.484890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.485003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.485040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:53.026 [2024-11-21 01:56:36.485061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:53.026 [2024-11-21 01:56:36.485148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.520387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.520581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:53.026 [2024-11-21 01:56:36.520705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.166 ms 00:30:53.026 [2024-11-21 01:56:36.520740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.520799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.520823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:53.026 [2024-11-21 01:56:36.520844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:53.026 [2024-11-21 01:56:36.520916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.521521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.521694] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:53.026 [2024-11-21 01:56:36.521768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:30:53.026 [2024-11-21 01:56:36.521792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.521867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.522028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:53.026 [2024-11-21 01:56:36.522055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:53.026 [2024-11-21 01:56:36.522074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.539737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.539903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:53.026 [2024-11-21 01:56:36.539964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.625 ms 00:30:53.026 [2024-11-21 01:56:36.539987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.554242] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:53.026 [2024-11-21 01:56:36.554465] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:53.026 [2024-11-21 01:56:36.554538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.554560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:53.026 [2024-11-21 01:56:36.554582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.357 ms 00:30:53.026 [2024-11-21 01:56:36.554600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.569522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.569704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:53.026 [2024-11-21 01:56:36.569773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.812 ms 00:30:53.026 [2024-11-21 01:56:36.569796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.582336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.582514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:53.026 [2024-11-21 01:56:36.582570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.481 ms 00:30:53.026 [2024-11-21 01:56:36.582592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.595256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.595412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:53.026 [2024-11-21 01:56:36.595472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.542 ms 00:30:53.026 [2024-11-21 01:56:36.595493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.596214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.596351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:53.026 [2024-11-21 
01:56:36.596409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.551 ms 00:30:53.026 [2024-11-21 01:56:36.596431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.669398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.669699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:53.026 [2024-11-21 01:56:36.669918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 72.925 ms 00:30:53.026 [2024-11-21 01:56:36.669947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.681041] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:53.026 [2024-11-21 01:56:36.682187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.682332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:53.026 [2024-11-21 01:56:36.682350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.173 ms 00:30:53.026 [2024-11-21 01:56:36.682360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.682482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.682499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:53.026 [2024-11-21 01:56:36.682508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:53.026 [2024-11-21 01:56:36.682517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.682578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.682590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:53.026 [2024-11-21 01:56:36.682600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:53.026 [2024-11-21 01:56:36.682608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.682660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.682670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:53.026 [2024-11-21 01:56:36.682680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:53.026 [2024-11-21 01:56:36.682691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.682730] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:53.026 [2024-11-21 01:56:36.682740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.682749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:53.026 [2024-11-21 01:56:36.682759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:53.026 [2024-11-21 01:56:36.682767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.708184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.708244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:53.026 [2024-11-21 01:56:36.708257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.396 ms 00:30:53.026 [2024-11-21 01:56:36.708265] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.708361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.026 [2024-11-21 01:56:36.708372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:53.026 [2024-11-21 01:56:36.708383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:53.026 [2024-11-21 01:56:36.708391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.026 [2024-11-21 01:56:36.709735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4197.874 ms, result 0 00:30:53.026 [2024-11-21 01:56:36.724606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:53.026 [2024-11-21 01:56:36.740640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:53.026 [2024-11-21 01:56:36.748807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:53.026 01:56:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.026 01:56:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:53.026 01:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:53.026 01:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:53.026 01:56:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:53.288 [2024-11-21 01:56:37.004385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.288 [2024-11-21 01:56:37.004442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.288 [2024-11-21 01:56:37.004458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:53.288 [2024-11-21 01:56:37.004471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.288 [2024-11-21 01:56:37.004498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.288 [2024-11-21 01:56:37.004508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.288 [2024-11-21 01:56:37.004518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:53.289 [2024-11-21 01:56:37.004527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.289 [2024-11-21 01:56:37.004548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.289 [2024-11-21 01:56:37.004558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.289 [2024-11-21 01:56:37.004566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:53.289 [2024-11-21 01:56:37.004574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.289 [2024-11-21 01:56:37.004661] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.246 ms, result 0 00:30:53.289 true 00:30:53.289 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.289 { 00:30:53.289 "name": "ftl", 00:30:53.289 "properties": [ 00:30:53.289 { 00:30:53.289 "name": "superblock_version", 00:30:53.289 "value": 5, 00:30:53.289 "read-only": true 00:30:53.289 }, 
00:30:53.289 { 00:30:53.289 "name": "base_device", 00:30:53.289 "bands": [ 00:30:53.289 { 00:30:53.289 "id": 0, 00:30:53.289 "state": "CLOSED", 00:30:53.289 "validity": 1.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 1, 00:30:53.289 "state": "CLOSED", 00:30:53.289 "validity": 1.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 2, 00:30:53.289 "state": "CLOSED", 00:30:53.289 "validity": 0.007843137254901933 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 3, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 4, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 5, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 6, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 7, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 8, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 9, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 10, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 11, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 12, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 13, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 14, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 15, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 16, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 17, 00:30:53.289 "state": "FREE", 00:30:53.289 "validity": 0.0 00:30:53.289 } 00:30:53.289 ], 00:30:53.289 "read-only": true 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "name": "cache_device", 00:30:53.289 "type": "bdev", 00:30:53.289 "chunks": [ 00:30:53.289 { 00:30:53.289 "id": 0, 00:30:53.289 "state": "INACTIVE", 00:30:53.289 "utilization": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 1, 00:30:53.289 "state": "OPEN", 00:30:53.289 "utilization": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 2, 00:30:53.289 "state": "OPEN", 00:30:53.289 "utilization": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 3, 00:30:53.289 "state": "FREE", 00:30:53.289 "utilization": 0.0 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "id": 4, 00:30:53.289 "state": "FREE", 00:30:53.289 "utilization": 0.0 00:30:53.289 } 00:30:53.289 ], 00:30:53.289 "read-only": true 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "name": "verbose_mode", 00:30:53.289 "value": true, 00:30:53.289 "unit": "", 00:30:53.289 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:53.289 }, 00:30:53.289 { 00:30:53.289 "name": "prep_upgrade_on_shutdown", 00:30:53.289 "value": false, 00:30:53.289 "unit": "", 00:30:53.289 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:53.289 } 00:30:53.289 ] 00:30:53.289 } 00:30:53.289 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:53.550 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.812 Validate MD5 checksum, iteration 1 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:53.812 01:56:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:53.812 [2024-11-21 01:56:37.733527] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:30:53.812 [2024-11-21 01:56:37.733789] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83484 ] 00:30:54.073 [2024-11-21 01:56:37.892794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.073 [2024-11-21 01:56:37.984231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.988  [2024-11-21T01:56:40.551Z] Copying: 586/1024 [MB] (586 MBps) [2024-11-21T01:56:41.937Z] Copying: 1024/1024 [MB] (average 569 MBps) 00:30:57.980 00:30:57.980 01:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:57.980 01:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4533b5e0d37e9897e0dd1d5ed5c64a24 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4533b5e0d37e9897e0dd1d5ed5c64a24 != \4\5\3\3\b\5\e\0\d\3\7\e\9\8\9\7\e\0\d\d\1\d\5\e\d\5\c\6\4\a\2\4 ]] 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:59.890 Validate MD5 checksum, iteration 2 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:59.890 01:56:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:59.890 [2024-11-21 01:56:43.706301] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
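Each "Validate MD5 checksum" iteration reads 1024 blocks of 1 MiB from the ftln1 bdev through spdk_dd, using the NVMe/TCP initiator config, hashes the output file, and compares the digest with a previously recorded reference (4533b5e0d37e9897e0dd1d5ed5c64a24 for iteration 1 above). A condensed sketch of one iteration; expected_md5 is an illustrative variable, not the script's actual bookkeeping:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum == "$expected_md5" ]]   # a mismatch would fail the test here

The second iteration repeats the same command with --skip=1024 to cover the next 1 GiB region.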
00:30:59.890 [2024-11-21 01:56:43.706502] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83552 ] 00:31:00.152 [2024-11-21 01:56:43.860890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.152 [2024-11-21 01:56:43.961145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.067  [2024-11-21T01:56:46.594Z] Copying: 505/1024 [MB] (505 MBps) [2024-11-21T01:56:47.529Z] Copying: 1024/1024 [MB] (average 537 MBps) 00:31:03.572 00:31:03.572 01:56:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:03.572 01:56:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:05.473 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b193e250d738028cf085de6fff73e734 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b193e250d738028cf085de6fff73e734 != \b\1\9\3\e\2\5\0\d\7\3\8\0\2\8\c\f\0\8\5\d\e\6\f\f\f\7\3\e\7\3\4 ]] 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83404 ]] 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83404 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83619 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83619 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83619 ']' 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
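tcp_target_shutdown_dirty is intentionally abrupt: the target that owns the FTL instance is killed with SIGKILL (PID 83404 here), so none of the "FTL shutdown" persistence steps seen earlier get to run, and tcp_target_setup then brings up a fresh spdk_tgt (PID 83619) from the same tgt.json. In outline, with the PIDs from this run:

    kill -9 "$spdk_tgt_pid"   # 83404: the clean-shutdown path never executes
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!           # 83619; waitforlisten then blocks until /var/tmp/spdk.sock is up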
00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:05.732 01:56:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:05.732 [2024-11-21 01:56:49.507733] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:05.732 [2024-11-21 01:56:49.507859] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83619 ] 00:31:05.732 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83404 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:05.732 [2024-11-21 01:56:49.663977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.991 [2024-11-21 01:56:49.774657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.558 [2024-11-21 01:56:50.399500] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.558 [2024-11-21 01:56:50.399558] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:06.818 [2024-11-21 01:56:50.548358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.818 [2024-11-21 01:56:50.548393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:06.818 [2024-11-21 01:56:50.548404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:06.818 [2024-11-21 01:56:50.548412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.818 [2024-11-21 01:56:50.548456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.818 [2024-11-21 01:56:50.548465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:06.818 [2024-11-21 01:56:50.548472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:06.818 [2024-11-21 01:56:50.548480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.818 [2024-11-21 01:56:50.548499] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:06.818 [2024-11-21 01:56:50.549033] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:06.818 [2024-11-21 01:56:50.549052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.818 [2024-11-21 01:56:50.549059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:06.818 [2024-11-21 01:56:50.549066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:31:06.818 [2024-11-21 01:56:50.549073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.818 [2024-11-21 01:56:50.549297] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:06.818 [2024-11-21 01:56:50.563674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.818 [2024-11-21 01:56:50.563707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:06.818 [2024-11-21 01:56:50.563717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.378 ms 
00:31:06.818 [2024-11-21 01:56:50.563724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.818 [2024-11-21 01:56:50.570789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.818 [2024-11-21 01:56:50.570815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:06.818 [2024-11-21 01:56:50.570825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:31:06.818 [2024-11-21 01:56:50.570832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.818 [2024-11-21 01:56:50.571084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.571094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:06.819 [2024-11-21 01:56:50.571101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:31:06.819 [2024-11-21 01:56:50.571107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.571146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.571156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:06.819 [2024-11-21 01:56:50.571163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:31:06.819 [2024-11-21 01:56:50.571169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.571189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.571196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:06.819 [2024-11-21 01:56:50.571203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:06.819 [2024-11-21 01:56:50.571208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.571224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:06.819 [2024-11-21 01:56:50.573576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.573599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:06.819 [2024-11-21 01:56:50.573607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.356 ms 00:31:06.819 [2024-11-21 01:56:50.573624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.573650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.573657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:06.819 [2024-11-21 01:56:50.573664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:06.819 [2024-11-21 01:56:50.573669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.573686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:06.819 [2024-11-21 01:56:50.573701] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:06.819 [2024-11-21 01:56:50.573729] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:06.819 [2024-11-21 01:56:50.573744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:06.819 [2024-11-21 
01:56:50.573827] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:06.819 [2024-11-21 01:56:50.573835] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:06.819 [2024-11-21 01:56:50.573843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:06.819 [2024-11-21 01:56:50.573851] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:06.819 [2024-11-21 01:56:50.573858] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:06.819 [2024-11-21 01:56:50.573865] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:06.819 [2024-11-21 01:56:50.573871] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:06.819 [2024-11-21 01:56:50.573877] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:06.819 [2024-11-21 01:56:50.573883] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:06.819 [2024-11-21 01:56:50.573889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.573897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:06.819 [2024-11-21 01:56:50.573904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:31:06.819 [2024-11-21 01:56:50.573910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.573974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.819 [2024-11-21 01:56:50.573981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:06.819 [2024-11-21 01:56:50.573987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:06.819 [2024-11-21 01:56:50.573992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.819 [2024-11-21 01:56:50.574067] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:06.819 [2024-11-21 01:56:50.574075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:06.819 [2024-11-21 01:56:50.574084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:06.819 [2024-11-21 01:56:50.574102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:06.819 [2024-11-21 01:56:50.574113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:06.819 [2024-11-21 01:56:50.574120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:06.819 [2024-11-21 01:56:50.574125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:06.819 [2024-11-21 01:56:50.574138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:06.819 [2024-11-21 01:56:50.574143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 
01:56:50.574148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:06.819 [2024-11-21 01:56:50.574153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:06.819 [2024-11-21 01:56:50.574159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:06.819 [2024-11-21 01:56:50.574169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:06.819 [2024-11-21 01:56:50.574174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:06.819 [2024-11-21 01:56:50.574184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:06.819 [2024-11-21 01:56:50.574203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:06.819 [2024-11-21 01:56:50.574220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:06.819 [2024-11-21 01:56:50.574235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:06.819 [2024-11-21 01:56:50.574250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:06.819 [2024-11-21 01:56:50.574265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:06.819 [2024-11-21 01:56:50.574280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:06.819 [2024-11-21 01:56:50.574295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:06.819 [2024-11-21 01:56:50.574300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574304] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:06.819 [2024-11-21 01:56:50.574311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:06.819 
[2024-11-21 01:56:50.574317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:06.819 [2024-11-21 01:56:50.574328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:06.819 [2024-11-21 01:56:50.574334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:06.819 [2024-11-21 01:56:50.574339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:06.819 [2024-11-21 01:56:50.574344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:06.819 [2024-11-21 01:56:50.574349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:06.819 [2024-11-21 01:56:50.574354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:06.819 [2024-11-21 01:56:50.574360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:06.819 [2024-11-21 01:56:50.574367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.819 [2024-11-21 01:56:50.574374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:06.819 [2024-11-21 01:56:50.574379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:06.819 [2024-11-21 01:56:50.574385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:06.819 [2024-11-21 01:56:50.574400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:06.819 [2024-11-21 01:56:50.574405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:06.819 [2024-11-21 01:56:50.574411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:06.819 [2024-11-21 01:56:50.574416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:06.820 [2024-11-21 01:56:50.574422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:06.820 [2024-11-21 01:56:50.574460] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:06.820 [2024-11-21 01:56:50.574466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.820 [2024-11-21 01:56:50.574479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:06.820 [2024-11-21 01:56:50.574485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:06.820 [2024-11-21 01:56:50.574491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:06.820 [2024-11-21 01:56:50.574497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.574505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:06.820 [2024-11-21 01:56:50.574511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.483 ms 00:31:06.820 [2024-11-21 01:56:50.574517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.596550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.596584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:06.820 [2024-11-21 01:56:50.596595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.995 ms 00:31:06.820 [2024-11-21 01:56:50.596601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.596661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.596670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:06.820 [2024-11-21 01:56:50.596677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:06.820 [2024-11-21 01:56:50.596683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.623113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.623142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:06.820 [2024-11-21 01:56:50.623150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.381 ms 00:31:06.820 [2024-11-21 01:56:50.623157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.623183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.623190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:06.820 [2024-11-21 01:56:50.623196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:06.820 [2024-11-21 01:56:50.623202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.623280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.623289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
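Most of the startup output above and below consists of trace_step records: an Action line, its name, a duration in ms, and a status. With the console saved one record per line (as it is live, before the wrapping in this capture), the slowest steps are easy to rank; ftl_startup.log below is a placeholder name for that saved output:

    # pair each "name:" record with the "duration:" record that follows it,
    # then list the most expensive FTL startup steps first
    grep trace_step ftl_startup.log |
        awk '/name:/     { sub(/.*name: /, "");     step = $0 }
             /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, step }' |
        sort -rn | head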
00:31:06.820 [2024-11-21 01:56:50.623295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:06.820 [2024-11-21 01:56:50.623302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.623334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.623342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:06.820 [2024-11-21 01:56:50.623348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:06.820 [2024-11-21 01:56:50.623354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.636742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.636768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:06.820 [2024-11-21 01:56:50.636777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.371 ms 00:31:06.820 [2024-11-21 01:56:50.636784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.636866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.636875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:06.820 [2024-11-21 01:56:50.636882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:06.820 [2024-11-21 01:56:50.636888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.658785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.658826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:06.820 [2024-11-21 01:56:50.658840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.881 ms 00:31:06.820 [2024-11-21 01:56:50.658849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.667382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.667413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:06.820 [2024-11-21 01:56:50.667426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.408 ms 00:31:06.820 [2024-11-21 01:56:50.667433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.714544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.714714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:06.820 [2024-11-21 01:56:50.714732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.066 ms 00:31:06.820 [2024-11-21 01:56:50.714740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.714864] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:06.820 [2024-11-21 01:56:50.714967] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:06.820 [2024-11-21 01:56:50.715070] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:06.820 [2024-11-21 01:56:50.715171] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:06.820 [2024-11-21 01:56:50.715184] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.715190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:06.820 [2024-11-21 01:56:50.715198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:31:06.820 [2024-11-21 01:56:50.715204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.715248] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:06.820 [2024-11-21 01:56:50.715258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.715268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:06.820 [2024-11-21 01:56:50.715275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:06.820 [2024-11-21 01:56:50.715281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.727609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.727644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:06.820 [2024-11-21 01:56:50.727653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.310 ms 00:31:06.820 [2024-11-21 01:56:50.727660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.734082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.734186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:06.820 [2024-11-21 01:56:50.734198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:06.820 [2024-11-21 01:56:50.734205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:06.820 [2024-11-21 01:56:50.734274] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:06.820 [2024-11-21 01:56:50.734455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:06.820 [2024-11-21 01:56:50.734470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:06.820 [2024-11-21 01:56:50.734478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:31:06.820 [2024-11-21 01:56:50.734484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.757 [2024-11-21 01:56:51.379287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.757 [2024-11-21 01:56:51.379476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:07.757 [2024-11-21 01:56:51.379541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 644.035 ms 00:31:07.757 [2024-11-21 01:56:51.379567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.757 [2024-11-21 01:56:51.383880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.757 [2024-11-21 01:56:51.383993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:07.757 [2024-11-21 01:56:51.384047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.383 ms 00:31:07.757 [2024-11-21 01:56:51.384059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.757 [2024-11-21 01:56:51.384987] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:31:07.757 [2024-11-21 01:56:51.385016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.757 [2024-11-21 01:56:51.385024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:07.757 [2024-11-21 01:56:51.385034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:31:07.757 [2024-11-21 01:56:51.385042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.757 [2024-11-21 01:56:51.385244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.757 [2024-11-21 01:56:51.385274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:07.757 [2024-11-21 01:56:51.385285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:07.757 [2024-11-21 01:56:51.385293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:07.757 [2024-11-21 01:56:51.385364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 651.080 ms, result 0 00:31:07.757 [2024-11-21 01:56:51.385405] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:07.757 [2024-11-21 01:56:51.385582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:07.757 [2024-11-21 01:56:51.385593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:07.757 [2024-11-21 01:56:51.385602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:31:07.757 [2024-11-21 01:56:51.385610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.090437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.090488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:08.328 [2024-11-21 01:56:52.090500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 703.827 ms 00:31:08.328 [2024-11-21 01:56:52.090507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.094451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.094638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:08.328 [2024-11-21 01:56:52.094657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.372 ms 00:31:08.328 [2024-11-21 01:56:52.094666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.095457] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:08.328 [2024-11-21 01:56:52.095505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.095514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:08.328 [2024-11-21 01:56:52.095524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:31:08.328 [2024-11-21 01:56:52.095532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.095566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.095575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:08.328 [2024-11-21 01:56:52.095582] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:08.328 [2024-11-21 01:56:52.095589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.095640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 710.211 ms, result 0 00:31:08.328 [2024-11-21 01:56:52.095685] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:08.328 [2024-11-21 01:56:52.095696] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:08.328 [2024-11-21 01:56:52.095706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.095715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:08.328 [2024-11-21 01:56:52.095722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1361.443 ms 00:31:08.328 [2024-11-21 01:56:52.095729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.095754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.095764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:08.328 [2024-11-21 01:56:52.095775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:08.328 [2024-11-21 01:56:52.095782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.107042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:08.328 [2024-11-21 01:56:52.107161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.107172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:08.328 [2024-11-21 01:56:52.107181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.366 ms 00:31:08.328 [2024-11-21 01:56:52.107188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.107799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.107814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:08.328 [2024-11-21 01:56:52.107826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:31:08.328 [2024-11-21 01:56:52.107832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.109521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.328 [2024-11-21 01:56:52.109543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:08.328 [2024-11-21 01:56:52.109551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.675 ms 00:31:08.328 [2024-11-21 01:56:52.109558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.328 [2024-11-21 01:56:52.109590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.329 [2024-11-21 01:56:52.109599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:08.329 [2024-11-21 01:56:52.109606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:08.329 [2024-11-21 01:56:52.109631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.329 [2024-11-21 01:56:52.109725] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.329 [2024-11-21 01:56:52.109736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:08.329 [2024-11-21 01:56:52.109743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:08.329 [2024-11-21 01:56:52.109749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.329 [2024-11-21 01:56:52.109768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.329 [2024-11-21 01:56:52.109775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:08.329 [2024-11-21 01:56:52.109782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:08.329 [2024-11-21 01:56:52.109788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.329 [2024-11-21 01:56:52.109817] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:08.329 [2024-11-21 01:56:52.109827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.329 [2024-11-21 01:56:52.109833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:08.329 [2024-11-21 01:56:52.109839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:08.329 [2024-11-21 01:56:52.109846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.329 [2024-11-21 01:56:52.109894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.329 [2024-11-21 01:56:52.109909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:08.329 [2024-11-21 01:56:52.109917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:08.329 [2024-11-21 01:56:52.109926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.329 [2024-11-21 01:56:52.111393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1562.545 ms, result 0 00:31:08.329 [2024-11-21 01:56:52.123753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.329 [2024-11-21 01:56:52.139728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:08.329 [2024-11-21 01:56:52.147881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:08.329 Validate MD5 checksum, iteration 1 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:08.329 01:56:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:08.329 [2024-11-21 01:56:52.249332] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:08.329 [2024-11-21 01:56:52.249577] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83648 ] 00:31:08.587 [2024-11-21 01:56:52.400923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.587 [2024-11-21 01:56:52.475031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.968  [2024-11-21T01:56:54.498Z] Copying: 673/1024 [MB] (673 MBps) [2024-11-21T01:56:55.437Z] Copying: 1024/1024 [MB] (average 654 MBps) 00:31:11.480 00:31:11.480 01:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:11.480 01:56:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4533b5e0d37e9897e0dd1d5ed5c64a24 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4533b5e0d37e9897e0dd1d5ed5c64a24 != \4\5\3\3\b\5\e\0\d\3\7\e\9\8\9\7\e\0\d\d\1\d\5\e\d\5\c\6\4\a\2\4 ]] 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:14.012 Validate MD5 checksum, iteration 2 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:14.012 01:56:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:14.012 [2024-11-21 01:56:57.512782] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:14.012 [2024-11-21 01:56:57.513014] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83705 ] 00:31:14.012 [2024-11-21 01:56:57.668416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.012 [2024-11-21 01:56:57.745034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.395  [2024-11-21T01:56:59.924Z] Copying: 662/1024 [MB] (662 MBps) [2024-11-21T01:57:04.110Z] Copying: 1024/1024 [MB] (average 648 MBps) 00:31:20.153 00:31:20.153 01:57:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:20.153 01:57:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b193e250d738028cf085de6fff73e734 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b193e250d738028cf085de6fff73e734 != \b\1\9\3\e\2\5\0\d\7\3\8\0\2\8\c\f\0\8\5\d\e\6\f\f\f\7\3\e\7\3\4 ]] 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:22.683 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83619 ]] 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83619 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83619 ']' 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83619 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83619 00:31:22.684 killing process with pid 83619 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83619' 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83619 00:31:22.684 01:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83619 00:31:22.943 [2024-11-21 01:57:06.773149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:22.943 [2024-11-21 01:57:06.784938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.784975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:22.943 [2024-11-21 01:57:06.784986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:22.943 [2024-11-21 01:57:06.784993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.785011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:22.943 [2024-11-21 01:57:06.787305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.787331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:22.943 [2024-11-21 01:57:06.787340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.282 ms 00:31:22.943 [2024-11-21 01:57:06.787349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.787533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.787543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:22.943 [2024-11-21 01:57:06.787550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:31:22.943 [2024-11-21 01:57:06.787556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.788699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.788722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:22.943 [2024-11-21 01:57:06.788730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.132 ms 00:31:22.943 [2024-11-21 01:57:06.788736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.789601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.789631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:22.943 [2024-11-21 01:57:06.789640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:31:22.943 [2024-11-21 01:57:06.789646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.797219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.797246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:22.943 [2024-11-21 01:57:06.797254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.546 ms 00:31:22.943 [2024-11-21 01:57:06.797264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.801390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.801417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:31:22.943 [2024-11-21 01:57:06.801426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.096 ms 00:31:22.943 [2024-11-21 01:57:06.801433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.801497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.801506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:22.943 [2024-11-21 01:57:06.801514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:22.943 [2024-11-21 01:57:06.801521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.808646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.808670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:22.943 [2024-11-21 01:57:06.808677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.109 ms 00:31:22.943 [2024-11-21 01:57:06.808682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.815810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.815982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:22.943 [2024-11-21 01:57:06.815995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.101 ms 00:31:22.943 [2024-11-21 01:57:06.816000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.823104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.823203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:22.943 [2024-11-21 01:57:06.823214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.078 ms 00:31:22.943 [2024-11-21 01:57:06.823220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.830961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.943 [2024-11-21 01:57:06.830986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:22.943 [2024-11-21 01:57:06.830993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.695 ms 00:31:22.943 [2024-11-21 01:57:06.830999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.943 [2024-11-21 01:57:06.831024] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:22.943 [2024-11-21 01:57:06.831037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.943 [2024-11-21 01:57:06.831045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:22.943 [2024-11-21 01:57:06.831051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:22.943 [2024-11-21 01:57:06.831058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831076] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:22.943 [2024-11-21 01:57:06.831148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:22.943 [2024-11-21 01:57:06.831155] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 1d88891f-7bbe-4830-9ae6-f803e7acd276 00:31:22.943 [2024-11-21 01:57:06.831161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:22.943 [2024-11-21 01:57:06.831168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:22.943 [2024-11-21 01:57:06.831173] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:22.943 [2024-11-21 01:57:06.831180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:22.943 [2024-11-21 01:57:06.831185] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:22.943 [2024-11-21 01:57:06.831191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:22.943 [2024-11-21 01:57:06.831196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:22.943 [2024-11-21 01:57:06.831201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:22.943 [2024-11-21 01:57:06.831207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:22.943 [2024-11-21 01:57:06.831214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.944 [2024-11-21 01:57:06.831223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:22.944 [2024-11-21 01:57:06.831230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:31:22.944 [2024-11-21 01:57:06.831237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.841573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.944 [2024-11-21 01:57:06.841598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:31:22.944 [2024-11-21 01:57:06.841607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.323 ms 00:31:22.944 [2024-11-21 01:57:06.841635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.841945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:22.944 [2024-11-21 01:57:06.841954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:22.944 [2024-11-21 01:57:06.841961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:31:22.944 [2024-11-21 01:57:06.841967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.877097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.944 [2024-11-21 01:57:06.877221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:22.944 [2024-11-21 01:57:06.877234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.944 [2024-11-21 01:57:06.877240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.877273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.944 [2024-11-21 01:57:06.877280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:22.944 [2024-11-21 01:57:06.877286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.944 [2024-11-21 01:57:06.877293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.877371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.944 [2024-11-21 01:57:06.877379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:22.944 [2024-11-21 01:57:06.877386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.944 [2024-11-21 01:57:06.877393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:22.944 [2024-11-21 01:57:06.877406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:22.944 [2024-11-21 01:57:06.877416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:22.944 [2024-11-21 01:57:06.877422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:22.944 [2024-11-21 01:57:06.877428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.202 [2024-11-21 01:57:06.941136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.202 [2024-11-21 01:57:06.941176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:23.202 [2024-11-21 01:57:06.941186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.202 [2024-11-21 01:57:06.941193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.202 [2024-11-21 01:57:06.993372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.202 [2024-11-21 01:57:06.993415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:23.202 [2024-11-21 01:57:06.993424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.202 [2024-11-21 01:57:06.993431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.202 [2024-11-21 01:57:06.993499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.202 [2024-11-21 01:57:06.993506] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:23.202 [2024-11-21 01:57:06.993513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.202 [2024-11-21 01:57:06.993520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.202 [2024-11-21 01:57:06.993570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.202 [2024-11-21 01:57:06.993578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:23.202 [2024-11-21 01:57:06.993589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.203 [2024-11-21 01:57:06.993603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.203 [2024-11-21 01:57:06.993696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.203 [2024-11-21 01:57:06.993705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:23.203 [2024-11-21 01:57:06.993712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.203 [2024-11-21 01:57:06.993717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.203 [2024-11-21 01:57:06.993745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.203 [2024-11-21 01:57:06.993753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:23.203 [2024-11-21 01:57:06.993760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.203 [2024-11-21 01:57:06.993768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.203 [2024-11-21 01:57:06.993805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.203 [2024-11-21 01:57:06.993813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:23.203 [2024-11-21 01:57:06.993820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.203 [2024-11-21 01:57:06.993825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.203 [2024-11-21 01:57:06.993864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:23.203 [2024-11-21 01:57:06.993871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:23.203 [2024-11-21 01:57:06.993881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:23.203 [2024-11-21 01:57:06.993887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:23.203 [2024-11-21 01:57:06.993997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 209.028 ms, result 0 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:23.769 Remove shared memory files 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83404 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:23.769 00:31:23.769 real 1m22.663s 00:31:23.769 user 1m53.121s 00:31:23.769 sys 0m20.121s 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.769 01:57:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:23.769 ************************************ 00:31:23.769 END TEST ftl_upgrade_shutdown 00:31:23.769 ************************************ 00:31:23.769 01:57:07 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:23.769 01:57:07 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:23.769 01:57:07 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:23.769 01:57:07 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.769 01:57:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:24.029 ************************************ 00:31:24.029 START TEST ftl_restore_fast 00:31:24.029 ************************************ 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:24.029 * Looking for test storage... 00:31:24.029 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:24.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.029 --rc genhtml_branch_coverage=1 00:31:24.029 --rc genhtml_function_coverage=1 00:31:24.029 --rc genhtml_legend=1 00:31:24.029 --rc geninfo_all_blocks=1 00:31:24.029 --rc geninfo_unexecuted_blocks=1 00:31:24.029 00:31:24.029 ' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:24.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.029 --rc genhtml_branch_coverage=1 00:31:24.029 --rc genhtml_function_coverage=1 00:31:24.029 --rc genhtml_legend=1 00:31:24.029 --rc geninfo_all_blocks=1 00:31:24.029 --rc geninfo_unexecuted_blocks=1 00:31:24.029 00:31:24.029 ' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:24.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.029 --rc genhtml_branch_coverage=1 00:31:24.029 --rc genhtml_function_coverage=1 00:31:24.029 --rc genhtml_legend=1 00:31:24.029 --rc geninfo_all_blocks=1 00:31:24.029 --rc geninfo_unexecuted_blocks=1 00:31:24.029 00:31:24.029 ' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:24.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.029 --rc genhtml_branch_coverage=1 00:31:24.029 --rc genhtml_function_coverage=1 00:31:24.029 --rc genhtml_legend=1 00:31:24.029 --rc geninfo_all_blocks=1 00:31:24.029 --rc geninfo_unexecuted_blocks=1 00:31:24.029 00:31:24.029 ' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.NLAmOGFFlv 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:24.029 01:57:07 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83895 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83895 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83895 ']' 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.029 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:24.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.030 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.030 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:24.030 01:57:07 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:24.030 [2024-11-21 01:57:07.982011] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:31:24.030 [2024-11-21 01:57:07.982293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83895 ] 00:31:24.288 [2024-11-21 01:57:08.128894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.288 [2024-11-21 01:57:08.219678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:24.854 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:25.112 01:57:08 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:25.370 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:25.370 { 00:31:25.370 "name": "nvme0n1", 00:31:25.371 "aliases": [ 00:31:25.371 "ca92f8c5-b486-4ea1-93ea-c69bf9d0d641" 00:31:25.371 ], 00:31:25.371 "product_name": "NVMe disk", 00:31:25.371 "block_size": 4096, 00:31:25.371 "num_blocks": 1310720, 00:31:25.371 "uuid": "ca92f8c5-b486-4ea1-93ea-c69bf9d0d641", 00:31:25.371 "numa_id": -1, 00:31:25.371 "assigned_rate_limits": { 00:31:25.371 "rw_ios_per_sec": 0, 00:31:25.371 "rw_mbytes_per_sec": 0, 00:31:25.371 "r_mbytes_per_sec": 0, 00:31:25.371 "w_mbytes_per_sec": 0 00:31:25.371 }, 00:31:25.371 "claimed": true, 00:31:25.371 "claim_type": "read_many_write_one", 00:31:25.371 "zoned": false, 00:31:25.371 "supported_io_types": { 00:31:25.371 "read": true, 00:31:25.371 "write": true, 00:31:25.371 "unmap": true, 00:31:25.371 "flush": true, 00:31:25.371 "reset": true, 00:31:25.371 "nvme_admin": true, 00:31:25.371 "nvme_io": true, 00:31:25.371 "nvme_io_md": false, 00:31:25.371 "write_zeroes": true, 00:31:25.371 "zcopy": false, 00:31:25.371 "get_zone_info": false, 00:31:25.371 "zone_management": false, 00:31:25.371 "zone_append": false, 00:31:25.371 "compare": true, 00:31:25.371 "compare_and_write": false, 00:31:25.371 "abort": true, 00:31:25.371 "seek_hole": false, 00:31:25.371 "seek_data": false, 00:31:25.371 "copy": true, 00:31:25.371 "nvme_iov_md": false 00:31:25.371 }, 00:31:25.371 "driver_specific": { 00:31:25.371 "nvme": [ 00:31:25.371 { 00:31:25.371 "pci_address": "0000:00:11.0", 00:31:25.371 "trid": { 00:31:25.371 "trtype": "PCIe", 00:31:25.371 "traddr": "0000:00:11.0" 00:31:25.371 }, 00:31:25.371 "ctrlr_data": { 00:31:25.371 "cntlid": 0, 00:31:25.371 "vendor_id": "0x1b36", 00:31:25.371 "model_number": "QEMU NVMe Ctrl", 00:31:25.371 "serial_number": "12341", 00:31:25.371 "firmware_revision": "8.0.0", 00:31:25.371 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:25.371 "oacs": { 00:31:25.371 "security": 0, 00:31:25.371 "format": 1, 00:31:25.371 "firmware": 0, 00:31:25.371 "ns_manage": 1 00:31:25.371 }, 00:31:25.371 "multi_ctrlr": false, 00:31:25.371 "ana_reporting": false 00:31:25.371 }, 00:31:25.371 "vs": { 00:31:25.371 "nvme_version": "1.4" 00:31:25.371 }, 00:31:25.372 "ns_data": { 00:31:25.372 "id": 1, 00:31:25.372 "can_share": false 00:31:25.372 } 00:31:25.372 } 00:31:25.372 ], 00:31:25.372 "mp_policy": "active_passive" 00:31:25.372 } 00:31:25.372 } 00:31:25.372 ]' 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:31:25.372 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:25.631 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=8038d3ae-a187-4f2b-ba06-815002938da2 00:31:25.631 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:25.631 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8038d3ae-a187-4f2b-ba06-815002938da2 00:31:25.890 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:25.890 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=55f008ac-a7b4-4ae3-8989-c2495a5c7dca 00:31:25.890 01:57:09 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 55f008ac-a7b4-4ae3-8989-c2495a5c7dca 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=44734068-8026-470c-8475-5826daec0726 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 44734068-8026-470c-8475-5826daec0726 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=44734068-8026-470c-8475-5826daec0726 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 44734068-8026-470c-8475-5826daec0726 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=44734068-8026-470c-8475-5826daec0726 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:26.148 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44734068-8026-470c-8475-5826daec0726 00:31:26.406 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:26.406 { 00:31:26.406 "name": "44734068-8026-470c-8475-5826daec0726", 00:31:26.406 "aliases": [ 00:31:26.406 "lvs/nvme0n1p0" 00:31:26.406 ], 00:31:26.406 "product_name": "Logical Volume", 00:31:26.406 "block_size": 4096, 00:31:26.406 "num_blocks": 26476544, 00:31:26.406 "uuid": "44734068-8026-470c-8475-5826daec0726", 00:31:26.406 "assigned_rate_limits": { 00:31:26.406 "rw_ios_per_sec": 0, 00:31:26.406 "rw_mbytes_per_sec": 0, 00:31:26.406 "r_mbytes_per_sec": 0, 00:31:26.406 "w_mbytes_per_sec": 0 00:31:26.406 }, 00:31:26.406 "claimed": false, 00:31:26.406 "zoned": false, 00:31:26.406 "supported_io_types": { 00:31:26.406 "read": true, 00:31:26.406 "write": true, 00:31:26.406 "unmap": true, 00:31:26.406 "flush": false, 00:31:26.406 "reset": true, 00:31:26.406 "nvme_admin": false, 00:31:26.406 "nvme_io": false, 00:31:26.406 "nvme_io_md": false, 00:31:26.406 "write_zeroes": true, 00:31:26.406 "zcopy": false, 00:31:26.406 "get_zone_info": false, 00:31:26.406 "zone_management": false, 00:31:26.406 "zone_append": 
false, 00:31:26.406 "compare": false, 00:31:26.406 "compare_and_write": false, 00:31:26.406 "abort": false, 00:31:26.406 "seek_hole": true, 00:31:26.406 "seek_data": true, 00:31:26.406 "copy": false, 00:31:26.406 "nvme_iov_md": false 00:31:26.406 }, 00:31:26.406 "driver_specific": { 00:31:26.406 "lvol": { 00:31:26.406 "lvol_store_uuid": "55f008ac-a7b4-4ae3-8989-c2495a5c7dca", 00:31:26.406 "base_bdev": "nvme0n1", 00:31:26.406 "thin_provision": true, 00:31:26.406 "num_allocated_clusters": 0, 00:31:26.406 "snapshot": false, 00:31:26.406 "clone": false, 00:31:26.406 "esnap_clone": false 00:31:26.407 } 00:31:26.407 } 00:31:26.407 } 00:31:26.407 ]' 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:26.407 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 44734068-8026-470c-8475-5826daec0726 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=44734068-8026-470c-8475-5826daec0726 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:26.665 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44734068-8026-470c-8475-5826daec0726 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:26.924 { 00:31:26.924 "name": "44734068-8026-470c-8475-5826daec0726", 00:31:26.924 "aliases": [ 00:31:26.924 "lvs/nvme0n1p0" 00:31:26.924 ], 00:31:26.924 "product_name": "Logical Volume", 00:31:26.924 "block_size": 4096, 00:31:26.924 "num_blocks": 26476544, 00:31:26.924 "uuid": "44734068-8026-470c-8475-5826daec0726", 00:31:26.924 "assigned_rate_limits": { 00:31:26.924 "rw_ios_per_sec": 0, 00:31:26.924 "rw_mbytes_per_sec": 0, 00:31:26.924 "r_mbytes_per_sec": 0, 00:31:26.924 "w_mbytes_per_sec": 0 00:31:26.924 }, 00:31:26.924 "claimed": false, 00:31:26.924 "zoned": false, 00:31:26.924 "supported_io_types": { 00:31:26.924 "read": true, 00:31:26.924 "write": true, 00:31:26.924 "unmap": true, 00:31:26.924 "flush": false, 00:31:26.924 "reset": true, 00:31:26.924 "nvme_admin": false, 00:31:26.924 "nvme_io": false, 00:31:26.924 "nvme_io_md": false, 00:31:26.924 "write_zeroes": true, 00:31:26.924 "zcopy": false, 00:31:26.924 "get_zone_info": false, 00:31:26.924 "zone_management": false, 
00:31:26.924 "zone_append": false, 00:31:26.924 "compare": false, 00:31:26.924 "compare_and_write": false, 00:31:26.924 "abort": false, 00:31:26.924 "seek_hole": true, 00:31:26.924 "seek_data": true, 00:31:26.924 "copy": false, 00:31:26.924 "nvme_iov_md": false 00:31:26.924 }, 00:31:26.924 "driver_specific": { 00:31:26.924 "lvol": { 00:31:26.924 "lvol_store_uuid": "55f008ac-a7b4-4ae3-8989-c2495a5c7dca", 00:31:26.924 "base_bdev": "nvme0n1", 00:31:26.924 "thin_provision": true, 00:31:26.924 "num_allocated_clusters": 0, 00:31:26.924 "snapshot": false, 00:31:26.924 "clone": false, 00:31:26.924 "esnap_clone": false 00:31:26.924 } 00:31:26.924 } 00:31:26.924 } 00:31:26.924 ]' 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:26.924 01:57:10 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 44734068-8026-470c-8475-5826daec0726 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=44734068-8026-470c-8475-5826daec0726 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:27.184 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 44734068-8026-470c-8475-5826daec0726 00:31:27.479 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:27.479 { 00:31:27.479 "name": "44734068-8026-470c-8475-5826daec0726", 00:31:27.479 "aliases": [ 00:31:27.479 "lvs/nvme0n1p0" 00:31:27.479 ], 00:31:27.479 "product_name": "Logical Volume", 00:31:27.479 "block_size": 4096, 00:31:27.479 "num_blocks": 26476544, 00:31:27.479 "uuid": "44734068-8026-470c-8475-5826daec0726", 00:31:27.479 "assigned_rate_limits": { 00:31:27.479 "rw_ios_per_sec": 0, 00:31:27.479 "rw_mbytes_per_sec": 0, 00:31:27.479 "r_mbytes_per_sec": 0, 00:31:27.479 "w_mbytes_per_sec": 0 00:31:27.479 }, 00:31:27.479 "claimed": false, 00:31:27.479 "zoned": false, 00:31:27.479 "supported_io_types": { 00:31:27.479 "read": true, 00:31:27.479 "write": true, 00:31:27.479 "unmap": true, 00:31:27.479 "flush": false, 00:31:27.479 "reset": true, 00:31:27.479 "nvme_admin": false, 00:31:27.479 "nvme_io": false, 00:31:27.479 "nvme_io_md": false, 00:31:27.479 "write_zeroes": true, 00:31:27.479 "zcopy": false, 00:31:27.479 "get_zone_info": false, 00:31:27.479 "zone_management": false, 00:31:27.479 "zone_append": false, 00:31:27.479 "compare": false, 00:31:27.479 "compare_and_write": false, 00:31:27.479 "abort": false, 00:31:27.479 "seek_hole": 
true, 00:31:27.479 "seek_data": true, 00:31:27.479 "copy": false, 00:31:27.479 "nvme_iov_md": false 00:31:27.479 }, 00:31:27.479 "driver_specific": { 00:31:27.479 "lvol": { 00:31:27.479 "lvol_store_uuid": "55f008ac-a7b4-4ae3-8989-c2495a5c7dca", 00:31:27.479 "base_bdev": "nvme0n1", 00:31:27.479 "thin_provision": true, 00:31:27.479 "num_allocated_clusters": 0, 00:31:27.479 "snapshot": false, 00:31:27.479 "clone": false, 00:31:27.479 "esnap_clone": false 00:31:27.479 } 00:31:27.479 } 00:31:27.479 } 00:31:27.479 ]' 00:31:27.479 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:27.479 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:27.479 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 44734068-8026-470c-8475-5826daec0726 --l2p_dram_limit 10' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:27.480 01:57:11 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 44734068-8026-470c-8475-5826daec0726 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:27.750 [2024-11-21 01:57:11.467331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.750 [2024-11-21 01:57:11.467483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:27.750 [2024-11-21 01:57:11.467504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:27.750 [2024-11-21 01:57:11.467512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.750 [2024-11-21 01:57:11.467558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.750 [2024-11-21 01:57:11.467566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:27.750 [2024-11-21 01:57:11.467575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:27.751 [2024-11-21 01:57:11.467580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.467601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:27.751 [2024-11-21 01:57:11.468122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:27.751 [2024-11-21 01:57:11.468140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.468147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:27.751 [2024-11-21 01:57:11.468155] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:31:27.751 [2024-11-21 01:57:11.468161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.468187] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 156fbf37-752a-40f3-a42b-1d6f226a350c 00:31:27.751 [2024-11-21 01:57:11.469462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.469495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:27.751 [2024-11-21 01:57:11.469504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:27.751 [2024-11-21 01:57:11.469516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.476452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.476481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:27.751 [2024-11-21 01:57:11.476492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.897 ms 00:31:27.751 [2024-11-21 01:57:11.476499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.476602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.476621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:27.751 [2024-11-21 01:57:11.476629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:27.751 [2024-11-21 01:57:11.476639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.476679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.476689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:27.751 [2024-11-21 01:57:11.476696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:27.751 [2024-11-21 01:57:11.476705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.476721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:27.751 [2024-11-21 01:57:11.480044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.480072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:27.751 [2024-11-21 01:57:11.480082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.325 ms 00:31:27.751 [2024-11-21 01:57:11.480088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.480115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.480122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:27.751 [2024-11-21 01:57:11.480130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:27.751 [2024-11-21 01:57:11.480135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.480151] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:27.751 [2024-11-21 01:57:11.480260] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:27.751 [2024-11-21 01:57:11.480274] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:27.751 [2024-11-21 01:57:11.480282] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:27.751 [2024-11-21 01:57:11.480292] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480299] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:27.751 [2024-11-21 01:57:11.480313] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:27.751 [2024-11-21 01:57:11.480322] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:27.751 [2024-11-21 01:57:11.480328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:27.751 [2024-11-21 01:57:11.480336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.480342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:27.751 [2024-11-21 01:57:11.480350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:31:27.751 [2024-11-21 01:57:11.480362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.480429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.751 [2024-11-21 01:57:11.480436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:27.751 [2024-11-21 01:57:11.480444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:27.751 [2024-11-21 01:57:11.480450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.751 [2024-11-21 01:57:11.480532] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:27.751 [2024-11-21 01:57:11.480541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:27.751 [2024-11-21 01:57:11.480553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:27.751 [2024-11-21 01:57:11.480572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:27.751 [2024-11-21 01:57:11.480591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.751 [2024-11-21 01:57:11.480603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:27.751 [2024-11-21 01:57:11.480608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:27.751 [2024-11-21 01:57:11.480626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.751 [2024-11-21 01:57:11.480633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:27.751 [2024-11-21 01:57:11.480640] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:31:27.751 [2024-11-21 01:57:11.480645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:27.751 [2024-11-21 01:57:11.480660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:27.751 [2024-11-21 01:57:11.480678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:27.751 [2024-11-21 01:57:11.480695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:27.751 [2024-11-21 01:57:11.480713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:27.751 [2024-11-21 01:57:11.480729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.751 [2024-11-21 01:57:11.480743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:27.751 [2024-11-21 01:57:11.480751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.751 [2024-11-21 01:57:11.480763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:27.751 [2024-11-21 01:57:11.480768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:27.751 [2024-11-21 01:57:11.480774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.751 [2024-11-21 01:57:11.480780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:27.751 [2024-11-21 01:57:11.480787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:27.751 [2024-11-21 01:57:11.480792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:27.751 [2024-11-21 01:57:11.480803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:27.751 [2024-11-21 01:57:11.480810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:27.751 [2024-11-21 01:57:11.480822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:27.751 [2024-11-21 01:57:11.480829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.751 [2024-11-21 
01:57:11.480836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.751 [2024-11-21 01:57:11.480842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:27.751 [2024-11-21 01:57:11.480850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:27.751 [2024-11-21 01:57:11.480854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:27.751 [2024-11-21 01:57:11.480862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:27.751 [2024-11-21 01:57:11.480868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:27.751 [2024-11-21 01:57:11.480874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:27.752 [2024-11-21 01:57:11.480882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:27.752 [2024-11-21 01:57:11.480890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.480899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:27.752 [2024-11-21 01:57:11.480906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:27.752 [2024-11-21 01:57:11.480912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:27.752 [2024-11-21 01:57:11.480919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:27.752 [2024-11-21 01:57:11.480924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:27.752 [2024-11-21 01:57:11.480930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:27.752 [2024-11-21 01:57:11.480936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:27.752 [2024-11-21 01:57:11.480943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:27.752 [2024-11-21 01:57:11.480949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:27.752 [2024-11-21 01:57:11.480958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.480963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.480970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.480976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.480983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:27.752 [2024-11-21 
01:57:11.480989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:27.752 [2024-11-21 01:57:11.480997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.481003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.752 [2024-11-21 01:57:11.481009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:27.752 [2024-11-21 01:57:11.481015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:27.752 [2024-11-21 01:57:11.481021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:27.752 [2024-11-21 01:57:11.481027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.752 [2024-11-21 01:57:11.481035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:27.752 [2024-11-21 01:57:11.481041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:31:27.752 [2024-11-21 01:57:11.481048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.752 [2024-11-21 01:57:11.481090] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:27.752 [2024-11-21 01:57:11.481101] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:31.960 [2024-11-21 01:57:15.475151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.960 [2024-11-21 01:57:15.475388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:31.960 [2024-11-21 01:57:15.475408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3994.045 ms 00:31:31.960 [2024-11-21 01:57:15.475417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.960 [2024-11-21 01:57:15.499191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.960 [2024-11-21 01:57:15.499233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:31.960 [2024-11-21 01:57:15.499245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.595 ms 00:31:31.960 [2024-11-21 01:57:15.499253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.960 [2024-11-21 01:57:15.499365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.960 [2024-11-21 01:57:15.499375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:31.960 [2024-11-21 01:57:15.499383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:31.960 [2024-11-21 01:57:15.499393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.960 [2024-11-21 01:57:15.525957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.960 [2024-11-21 01:57:15.526109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:31.960 [2024-11-21 01:57:15.526124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:31:31.961 [2024-11-21 01:57:15.526133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.526159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.526171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:31.961 [2024-11-21 01:57:15.526178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:31.961 [2024-11-21 01:57:15.526185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.526608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.526640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:31.961 [2024-11-21 01:57:15.526648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:31:31.961 [2024-11-21 01:57:15.526655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.526738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.526747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:31.961 [2024-11-21 01:57:15.526756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:31.961 [2024-11-21 01:57:15.526766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.539887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.539919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:31.961 [2024-11-21 01:57:15.539927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.108 ms 00:31:31.961 [2024-11-21 01:57:15.539935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.549729] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:31.961 [2024-11-21 01:57:15.552692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.552817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:31.961 [2024-11-21 01:57:15.552833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.696 ms 00:31:31.961 [2024-11-21 01:57:15.552840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.627673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.627706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:31.961 [2024-11-21 01:57:15.627719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.808 ms 00:31:31.961 [2024-11-21 01:57:15.627728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.627878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.627890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:31.961 [2024-11-21 01:57:15.627902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:31:31.961 [2024-11-21 01:57:15.627909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.646568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.646596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:31:31.961 [2024-11-21 01:57:15.646608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.623 ms 00:31:31.961 [2024-11-21 01:57:15.646625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.664305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.664329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:31.961 [2024-11-21 01:57:15.664340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.646 ms 00:31:31.961 [2024-11-21 01:57:15.664346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.664824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.664833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:31.961 [2024-11-21 01:57:15.664842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:31:31.961 [2024-11-21 01:57:15.664848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.727058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.727085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:31.961 [2024-11-21 01:57:15.727098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.182 ms 00:31:31.961 [2024-11-21 01:57:15.727104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.746903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.747036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:31.961 [2024-11-21 01:57:15.747053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.741 ms 00:31:31.961 [2024-11-21 01:57:15.747060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.765264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.765290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:31.961 [2024-11-21 01:57:15.765300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:31:31.961 [2024-11-21 01:57:15.765307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.783873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.783902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:31.961 [2024-11-21 01:57:15.783912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.534 ms 00:31:31.961 [2024-11-21 01:57:15.783918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.783952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.783960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:31.961 [2024-11-21 01:57:15.783971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:31.961 [2024-11-21 01:57:15.783978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.784053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.961 [2024-11-21 01:57:15.784061] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:31.961 [2024-11-21 01:57:15.784073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:31.961 [2024-11-21 01:57:15.784079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.961 [2024-11-21 01:57:15.784884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4317.160 ms, result 0 00:31:31.961 { 00:31:31.961 "name": "ftl0", 00:31:31.961 "uuid": "156fbf37-752a-40f3-a42b-1d6f226a350c" 00:31:31.961 } 00:31:31.961 01:57:15 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:31.961 01:57:15 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:32.220 01:57:16 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:32.220 01:57:16 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:32.481 [2024-11-21 01:57:16.180390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.180513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:32.481 [2024-11-21 01:57:16.180528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:32.481 [2024-11-21 01:57:16.180541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.180564] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:32.481 [2024-11-21 01:57:16.182845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.182869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:32.481 [2024-11-21 01:57:16.182879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.266 ms 00:31:32.481 [2024-11-21 01:57:16.182886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.183083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.183091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:32.481 [2024-11-21 01:57:16.183102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:31:32.481 [2024-11-21 01:57:16.183108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.185634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.185694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:32.481 [2024-11-21 01:57:16.185735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:31:32.481 [2024-11-21 01:57:16.185753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.190563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.190656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:32.481 [2024-11-21 01:57:16.190704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:31:32.481 [2024-11-21 01:57:16.190723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.208707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 
[2024-11-21 01:57:16.208798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:32.481 [2024-11-21 01:57:16.208840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.914 ms 00:31:32.481 [2024-11-21 01:57:16.208858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.221910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.222008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:32.481 [2024-11-21 01:57:16.222055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.015 ms 00:31:32.481 [2024-11-21 01:57:16.222073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.222192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.222214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:32.481 [2024-11-21 01:57:16.222232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:31:32.481 [2024-11-21 01:57:16.222247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.240259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.240344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:32.481 [2024-11-21 01:57:16.240386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.988 ms 00:31:32.481 [2024-11-21 01:57:16.240403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.258388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.258486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:32.481 [2024-11-21 01:57:16.258526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.951 ms 00:31:32.481 [2024-11-21 01:57:16.258543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.275862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.275944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:32.481 [2024-11-21 01:57:16.275958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.282 ms 00:31:32.481 [2024-11-21 01:57:16.275964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.293316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.481 [2024-11-21 01:57:16.293342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:32.481 [2024-11-21 01:57:16.293351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.299 ms 00:31:32.481 [2024-11-21 01:57:16.293357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.481 [2024-11-21 01:57:16.293385] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:32.481 [2024-11-21 01:57:16.293396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:32.481 [2024-11-21 01:57:16.293570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293584] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 
01:57:16.293774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:32.482 [2024-11-21 01:57:16.293949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.293993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:32.482 [2024-11-21 01:57:16.294107] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:32.482 [2024-11-21 01:57:16.294117] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 156fbf37-752a-40f3-a42b-1d6f226a350c 00:31:32.482 
[2024-11-21 01:57:16.294123] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:32.482 [2024-11-21 01:57:16.294132] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:32.482 [2024-11-21 01:57:16.294138] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:32.482 [2024-11-21 01:57:16.294147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:32.482 [2024-11-21 01:57:16.294153] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:32.482 [2024-11-21 01:57:16.294160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:32.482 [2024-11-21 01:57:16.294166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:32.482 [2024-11-21 01:57:16.294173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:32.482 [2024-11-21 01:57:16.294177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:32.482 [2024-11-21 01:57:16.294185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.482 [2024-11-21 01:57:16.294190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:32.482 [2024-11-21 01:57:16.294202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:31:32.482 [2024-11-21 01:57:16.294207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.482 [2024-11-21 01:57:16.303852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.482 [2024-11-21 01:57:16.303875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:32.482 [2024-11-21 01:57:16.303885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.618 ms 00:31:32.482 [2024-11-21 01:57:16.303891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.482 [2024-11-21 01:57:16.304173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-11-21 01:57:16.304181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:32.483 [2024-11-21 01:57:16.304189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:31:32.483 [2024-11-21 01:57:16.304197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-11-21 01:57:16.339153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.483 [2024-11-21 01:57:16.339268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.483 [2024-11-21 01:57:16.339285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.483 [2024-11-21 01:57:16.339291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-11-21 01:57:16.339340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.483 [2024-11-21 01:57:16.339348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.483 [2024-11-21 01:57:16.339356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.483 [2024-11-21 01:57:16.339364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-11-21 01:57:16.339421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.483 [2024-11-21 01:57:16.339430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.483 [2024-11-21 01:57:16.339438] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.483 [2024-11-21 01:57:16.339444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-11-21 01:57:16.339461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.483 [2024-11-21 01:57:16.339468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.483 [2024-11-21 01:57:16.339476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.483 [2024-11-21 01:57:16.339482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-11-21 01:57:16.401822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.483 [2024-11-21 01:57:16.401948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.483 [2024-11-21 01:57:16.401964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.483 [2024-11-21 01:57:16.401972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.741 [2024-11-21 01:57:16.452510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.742 [2024-11-21 01:57:16.452552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.452661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:32.742 [2024-11-21 01:57:16.452679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.452725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:32.742 [2024-11-21 01:57:16.452742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.452840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:32.742 [2024-11-21 01:57:16.452857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.452894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:32.742 [2024-11-21 01:57:16.452909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.452953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.452963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:32.742 [2024-11-21 01:57:16.452971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.452977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.453019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.742 [2024-11-21 01:57:16.453028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:32.742 [2024-11-21 01:57:16.453036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.742 [2024-11-21 01:57:16.453042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.742 [2024-11-21 01:57:16.453166] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.734 ms, result 0 00:31:32.742 true 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83895 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83895 ']' 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83895 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83895 00:31:32.742 killing process with pid 83895 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83895' 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83895 00:31:32.742 01:57:16 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83895 00:31:39.323 01:57:22 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:43.527 262144+0 records in 00:31:43.527 262144+0 records out 00:31:43.528 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12375 s, 260 MB/s 00:31:43.528 01:57:26 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:44.913 01:57:28 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:44.913 [2024-11-21 01:57:28.720567] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:31:44.913 [2024-11-21 01:57:28.720673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:31:45.174 [2024-11-21 01:57:28.874212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.174 [2024-11-21 01:57:28.989646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.439 [2024-11-21 01:57:29.316869] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:45.439 [2024-11-21 01:57:29.317282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:45.700 [2024-11-21 01:57:29.481835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.481898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:45.700 [2024-11-21 01:57:29.481921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:45.700 [2024-11-21 01:57:29.481931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.481992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.482003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:45.700 [2024-11-21 01:57:29.482015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:45.700 [2024-11-21 01:57:29.482023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.482044] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:45.700 [2024-11-21 01:57:29.482865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:45.700 [2024-11-21 01:57:29.482890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.482900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:45.700 [2024-11-21 01:57:29.482913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:31:45.700 [2024-11-21 01:57:29.482922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.485171] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:45.700 [2024-11-21 01:57:29.500472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.500525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:45.700 [2024-11-21 01:57:29.500539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.303 ms 00:31:45.700 [2024-11-21 01:57:29.500549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.500651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.500663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:45.700 [2024-11-21 01:57:29.500673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:45.700 [2024-11-21 01:57:29.500682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.512229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:45.700 [2024-11-21 01:57:29.512275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:45.700 [2024-11-21 01:57:29.512286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.464 ms 00:31:45.700 [2024-11-21 01:57:29.512296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.512387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.512397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:45.700 [2024-11-21 01:57:29.512407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:45.700 [2024-11-21 01:57:29.512416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.512481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.512493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:45.700 [2024-11-21 01:57:29.512502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:45.700 [2024-11-21 01:57:29.512511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.512538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:45.700 [2024-11-21 01:57:29.517177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.517219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:45.700 [2024-11-21 01:57:29.517231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:31:45.700 [2024-11-21 01:57:29.517244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.517283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.517292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:45.700 [2024-11-21 01:57:29.517302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:45.700 [2024-11-21 01:57:29.517310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.700 [2024-11-21 01:57:29.517349] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:45.700 [2024-11-21 01:57:29.517375] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:45.700 [2024-11-21 01:57:29.517416] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:45.700 [2024-11-21 01:57:29.517440] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:45.700 [2024-11-21 01:57:29.517553] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:45.700 [2024-11-21 01:57:29.517565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:45.700 [2024-11-21 01:57:29.517577] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:45.700 [2024-11-21 01:57:29.517590] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:45.700 [2024-11-21 01:57:29.517600] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:45.700 [2024-11-21 01:57:29.517634] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:45.700 [2024-11-21 01:57:29.517643] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:45.700 [2024-11-21 01:57:29.517653] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:45.700 [2024-11-21 01:57:29.517662] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:45.700 [2024-11-21 01:57:29.517676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.700 [2024-11-21 01:57:29.517684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:45.700 [2024-11-21 01:57:29.517694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:31:45.701 [2024-11-21 01:57:29.517703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.517806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.517817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:45.701 [2024-11-21 01:57:29.517826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:31:45.701 [2024-11-21 01:57:29.517834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.517945] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:45.701 [2024-11-21 01:57:29.517959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:45.701 [2024-11-21 01:57:29.517970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:45.701 [2024-11-21 01:57:29.517979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.517993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:45.701 [2024-11-21 01:57:29.518002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:45.701 [2024-11-21 01:57:29.518026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:45.701 [2024-11-21 01:57:29.518042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:45.701 [2024-11-21 01:57:29.518054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:45.701 [2024-11-21 01:57:29.518062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:45.701 [2024-11-21 01:57:29.518070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:45.701 [2024-11-21 01:57:29.518079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:45.701 [2024-11-21 01:57:29.518093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:45.701 [2024-11-21 01:57:29.518108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518114] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:45.701 [2024-11-21 01:57:29.518128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:45.701 [2024-11-21 01:57:29.518148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:45.701 [2024-11-21 01:57:29.518170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:45.701 [2024-11-21 01:57:29.518190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:45.701 [2024-11-21 01:57:29.518212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:45.701 [2024-11-21 01:57:29.518226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:45.701 [2024-11-21 01:57:29.518233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:45.701 [2024-11-21 01:57:29.518240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:45.701 [2024-11-21 01:57:29.518246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:45.701 [2024-11-21 01:57:29.518254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:45.701 [2024-11-21 01:57:29.518260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:45.701 [2024-11-21 01:57:29.518276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:45.701 [2024-11-21 01:57:29.518282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518290] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:45.701 [2024-11-21 01:57:29.518297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:45.701 [2024-11-21 01:57:29.518306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:45.701 [2024-11-21 01:57:29.518322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:45.701 [2024-11-21 01:57:29.518329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:45.701 [2024-11-21 01:57:29.518338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:45.701 
[2024-11-21 01:57:29.518345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:45.701 [2024-11-21 01:57:29.518352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:45.701 [2024-11-21 01:57:29.518359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:45.701 [2024-11-21 01:57:29.518368] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:45.701 [2024-11-21 01:57:29.518378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:45.701 [2024-11-21 01:57:29.518395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:45.701 [2024-11-21 01:57:29.518403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:45.701 [2024-11-21 01:57:29.518427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:45.701 [2024-11-21 01:57:29.518434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:45.701 [2024-11-21 01:57:29.518441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:45.701 [2024-11-21 01:57:29.518449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:45.701 [2024-11-21 01:57:29.518456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:45.701 [2024-11-21 01:57:29.518464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:45.701 [2024-11-21 01:57:29.518472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:45.701 [2024-11-21 01:57:29.518511] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:45.701 [2024-11-21 01:57:29.518523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:45.701 [2024-11-21 01:57:29.518539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:45.701 [2024-11-21 01:57:29.518547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:45.701 [2024-11-21 01:57:29.518555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:45.701 [2024-11-21 01:57:29.518570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.518579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:45.701 [2024-11-21 01:57:29.518587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:31:45.701 [2024-11-21 01:57:29.518595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.557015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.557287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:45.701 [2024-11-21 01:57:29.557310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.354 ms 00:31:45.701 [2024-11-21 01:57:29.557321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.557432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.557442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:45.701 [2024-11-21 01:57:29.557451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:31:45.701 [2024-11-21 01:57:29.557460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.606888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.606943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:45.701 [2024-11-21 01:57:29.606958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.362 ms 00:31:45.701 [2024-11-21 01:57:29.606968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.607021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.701 [2024-11-21 01:57:29.607032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:45.701 [2024-11-21 01:57:29.607042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:45.701 [2024-11-21 01:57:29.607055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.701 [2024-11-21 01:57:29.607853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.702 [2024-11-21 01:57:29.607880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:45.702 [2024-11-21 01:57:29.607893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:31:45.702 [2024-11-21 01:57:29.607904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.702 [2024-11-21 01:57:29.608079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.702 [2024-11-21 01:57:29.608092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:45.702 [2024-11-21 01:57:29.608101] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:31:45.702 [2024-11-21 01:57:29.608118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.702 [2024-11-21 01:57:29.626341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.702 [2024-11-21 01:57:29.626388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:45.702 [2024-11-21 01:57:29.626404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.199 ms 00:31:45.702 [2024-11-21 01:57:29.626427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.702 [2024-11-21 01:57:29.641936] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:45.702 [2024-11-21 01:57:29.641988] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:45.702 [2024-11-21 01:57:29.642003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.702 [2024-11-21 01:57:29.642012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:45.702 [2024-11-21 01:57:29.642023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.453 ms 00:31:45.702 [2024-11-21 01:57:29.642031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.963 [2024-11-21 01:57:29.668671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.963 [2024-11-21 01:57:29.668913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:45.963 [2024-11-21 01:57:29.668945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:31:45.963 [2024-11-21 01:57:29.668955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.681929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.681988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:45.964 [2024-11-21 01:57:29.682000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.924 ms 00:31:45.964 [2024-11-21 01:57:29.682008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.694609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.694666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:45.964 [2024-11-21 01:57:29.694677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.548 ms 00:31:45.964 [2024-11-21 01:57:29.694686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.695338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.695366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:45.964 [2024-11-21 01:57:29.695378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:31:45.964 [2024-11-21 01:57:29.695387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.769601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.769670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:45.964 [2024-11-21 01:57:29.769686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 74.189 ms 00:31:45.964 [2024-11-21 01:57:29.769704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.781263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:45.964 [2024-11-21 01:57:29.784465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.784720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:45.964 [2024-11-21 01:57:29.784743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.701 ms 00:31:45.964 [2024-11-21 01:57:29.784753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.784848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.784861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:45.964 [2024-11-21 01:57:29.784873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:45.964 [2024-11-21 01:57:29.784883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.784966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.784978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:45.964 [2024-11-21 01:57:29.784987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:45.964 [2024-11-21 01:57:29.784997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.785022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.785033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:45.964 [2024-11-21 01:57:29.785042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:45.964 [2024-11-21 01:57:29.785050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.785087] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:45.964 [2024-11-21 01:57:29.785100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.785112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:45.964 [2024-11-21 01:57:29.785122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:45.964 [2024-11-21 01:57:29.785132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.812219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.812419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:45.964 [2024-11-21 01:57:29.812441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.062 ms 00:31:45.964 [2024-11-21 01:57:29.812452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:45.964 [2024-11-21 01:57:29.812549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:45.964 [2024-11-21 01:57:29.812560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:45.964 [2024-11-21 01:57:29.812570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:45.964 [2024-11-21 01:57:29.812579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:45.964 [2024-11-21 01:57:29.814092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.665 ms, result 0 00:31:46.906  [2024-11-21T01:57:32.246Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-21T01:57:33.180Z] Copying: 41/1024 [MB] (21 MBps) [2024-11-21T01:57:34.122Z] Copying: 59/1024 [MB] (18 MBps) [2024-11-21T01:57:35.058Z] Copying: 75/1024 [MB] (16 MBps) [2024-11-21T01:57:35.992Z] Copying: 95/1024 [MB] (20 MBps) [2024-11-21T01:57:36.927Z] Copying: 109/1024 [MB] (13 MBps) [2024-11-21T01:57:37.861Z] Copying: 125/1024 [MB] (15 MBps) [2024-11-21T01:57:39.236Z] Copying: 143/1024 [MB] (18 MBps) [2024-11-21T01:57:40.170Z] Copying: 163/1024 [MB] (19 MBps) [2024-11-21T01:57:41.105Z] Copying: 182/1024 [MB] (19 MBps) [2024-11-21T01:57:42.047Z] Copying: 198/1024 [MB] (16 MBps) [2024-11-21T01:57:42.981Z] Copying: 209/1024 [MB] (10 MBps) [2024-11-21T01:57:43.916Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-21T01:57:44.857Z] Copying: 233/1024 [MB] (12 MBps) [2024-11-21T01:57:45.908Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-21T01:57:46.842Z] Copying: 256/1024 [MB] (12 MBps) [2024-11-21T01:57:48.217Z] Copying: 269/1024 [MB] (12 MBps) [2024-11-21T01:57:49.153Z] Copying: 281/1024 [MB] (12 MBps) [2024-11-21T01:57:50.097Z] Copying: 294/1024 [MB] (12 MBps) [2024-11-21T01:57:51.036Z] Copying: 306/1024 [MB] (12 MBps) [2024-11-21T01:57:51.972Z] Copying: 317/1024 [MB] (10 MBps) [2024-11-21T01:57:52.907Z] Copying: 329/1024 [MB] (12 MBps) [2024-11-21T01:57:53.842Z] Copying: 342/1024 [MB] (12 MBps) [2024-11-21T01:57:55.217Z] Copying: 354/1024 [MB] (12 MBps) [2024-11-21T01:57:56.152Z] Copying: 366/1024 [MB] (12 MBps) [2024-11-21T01:57:57.086Z] Copying: 378/1024 [MB] (12 MBps) [2024-11-21T01:57:58.021Z] Copying: 391/1024 [MB] (12 MBps) [2024-11-21T01:57:58.956Z] Copying: 404/1024 [MB] (12 MBps) [2024-11-21T01:57:59.898Z] Copying: 416/1024 [MB] (12 MBps) [2024-11-21T01:58:00.838Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-21T01:58:02.213Z] Copying: 438/1024 [MB] (10 MBps) [2024-11-21T01:58:03.147Z] Copying: 450/1024 [MB] (12 MBps) [2024-11-21T01:58:04.081Z] Copying: 463/1024 [MB] (12 MBps) [2024-11-21T01:58:05.015Z] Copying: 475/1024 [MB] (12 MBps) [2024-11-21T01:58:05.950Z] Copying: 488/1024 [MB] (12 MBps) [2024-11-21T01:58:06.888Z] Copying: 500/1024 [MB] (12 MBps) [2024-11-21T01:58:07.832Z] Copying: 513/1024 [MB] (12 MBps) [2024-11-21T01:58:09.217Z] Copying: 524/1024 [MB] (11 MBps) [2024-11-21T01:58:10.152Z] Copying: 535/1024 [MB] (10 MBps) [2024-11-21T01:58:11.094Z] Copying: 545/1024 [MB] (10 MBps) [2024-11-21T01:58:12.031Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-21T01:58:12.974Z] Copying: 569/1024 [MB] (12 MBps) [2024-11-21T01:58:13.916Z] Copying: 582/1024 [MB] (12 MBps) [2024-11-21T01:58:14.913Z] Copying: 592/1024 [MB] (10 MBps) [2024-11-21T01:58:15.853Z] Copying: 604/1024 [MB] (12 MBps) [2024-11-21T01:58:17.233Z] Copying: 618/1024 [MB] (13 MBps) [2024-11-21T01:58:18.168Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-21T01:58:19.107Z] Copying: 642/1024 [MB] (12 MBps) [2024-11-21T01:58:20.040Z] Copying: 657/1024 [MB] (14 MBps) [2024-11-21T01:58:20.977Z] Copying: 672/1024 [MB] (15 MBps) [2024-11-21T01:58:21.912Z] Copying: 685/1024 [MB] (13 MBps) [2024-11-21T01:58:22.848Z] Copying: 701/1024 [MB] (15 MBps) [2024-11-21T01:58:24.221Z] Copying: 716/1024 [MB] (15 MBps) [2024-11-21T01:58:25.156Z] Copying: 728/1024 [MB] (12 MBps) [2024-11-21T01:58:26.093Z] Copying: 741/1024 [MB] (12 MBps) [2024-11-21T01:58:27.028Z] Copying: 753/1024 [MB] (12 MBps) 
[2024-11-21T01:58:27.969Z] Copying: 765/1024 [MB] (12 MBps) [2024-11-21T01:58:28.907Z] Copying: 777/1024 [MB] (11 MBps) [2024-11-21T01:58:29.841Z] Copying: 787/1024 [MB] (10 MBps) [2024-11-21T01:58:31.215Z] Copying: 800/1024 [MB] (12 MBps) [2024-11-21T01:58:32.151Z] Copying: 811/1024 [MB] (11 MBps) [2024-11-21T01:58:33.088Z] Copying: 824/1024 [MB] (12 MBps) [2024-11-21T01:58:34.031Z] Copying: 839/1024 [MB] (15 MBps) [2024-11-21T01:58:34.976Z] Copying: 855/1024 [MB] (16 MBps) [2024-11-21T01:58:35.920Z] Copying: 865/1024 [MB] (10 MBps) [2024-11-21T01:58:36.855Z] Copying: 875/1024 [MB] (10 MBps) [2024-11-21T01:58:38.230Z] Copying: 888/1024 [MB] (12 MBps) [2024-11-21T01:58:39.165Z] Copying: 900/1024 [MB] (12 MBps) [2024-11-21T01:58:40.097Z] Copying: 913/1024 [MB] (12 MBps) [2024-11-21T01:58:41.039Z] Copying: 926/1024 [MB] (12 MBps) [2024-11-21T01:58:41.983Z] Copying: 944/1024 [MB] (18 MBps) [2024-11-21T01:58:42.926Z] Copying: 955/1024 [MB] (10 MBps) [2024-11-21T01:58:43.940Z] Copying: 966/1024 [MB] (11 MBps) [2024-11-21T01:58:44.883Z] Copying: 977/1024 [MB] (10 MBps) [2024-11-21T01:58:46.266Z] Copying: 988/1024 [MB] (10 MBps) [2024-11-21T01:58:46.839Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-21T01:58:47.785Z] Copying: 1008/1024 [MB] (10 MBps) [2024-11-21T01:58:47.785Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-21 01:58:47.677305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.828 [2024-11-21 01:58:47.677343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:03.828 [2024-11-21 01:58:47.677355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:03.828 [2024-11-21 01:58:47.677362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.828 [2024-11-21 01:58:47.677378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:03.828 [2024-11-21 01:58:47.679563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.828 [2024-11-21 01:58:47.679584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:03.828 [2024-11-21 01:58:47.679592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.174 ms 00:33:03.828 [2024-11-21 01:58:47.679599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.828 [2024-11-21 01:58:47.681113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.828 [2024-11-21 01:58:47.681143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:03.828 [2024-11-21 01:58:47.681150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:33:03.828 [2024-11-21 01:58:47.681157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.828 [2024-11-21 01:58:47.681176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.828 [2024-11-21 01:58:47.681183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:03.828 [2024-11-21 01:58:47.681190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:03.828 [2024-11-21 01:58:47.681195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.828 [2024-11-21 01:58:47.681234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.828 [2024-11-21 01:58:47.681243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:03.828 [2024-11-21 01:58:47.681249] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:03.828 [2024-11-21 01:58:47.681255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.828 [2024-11-21 01:58:47.681265] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:03.828 [2024-11-21 01:58:47.681274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:03.828 [2024-11-21 01:58:47.681553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681814] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 
01:58:47.681959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:03.829 [2024-11-21 01:58:47.681982] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:03.829 [2024-11-21 01:58:47.681988] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 156fbf37-752a-40f3-a42b-1d6f226a350c 00:33:03.829 [2024-11-21 01:58:47.681994] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:03.829 [2024-11-21 01:58:47.682000] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:03.829 [2024-11-21 01:58:47.682005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:03.829 [2024-11-21 01:58:47.682011] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:03.829 [2024-11-21 01:58:47.682018] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:03.829 [2024-11-21 01:58:47.682024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:03.829 [2024-11-21 01:58:47.682029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:03.829 [2024-11-21 01:58:47.682034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:03.829 [2024-11-21 01:58:47.682039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:03.829 [2024-11-21 01:58:47.682045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.829 [2024-11-21 01:58:47.682051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:03.829 [2024-11-21 01:58:47.682057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:33:03.829 [2024-11-21 01:58:47.682062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.829 [2024-11-21 01:58:47.691861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.829 [2024-11-21 01:58:47.691886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:03.829 [2024-11-21 01:58:47.691898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.787 ms 00:33:03.829 [2024-11-21 01:58:47.691904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.829 [2024-11-21 01:58:47.692168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:03.829 [2024-11-21 01:58:47.692174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:03.829 [2024-11-21 01:58:47.692180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:33:03.829 [2024-11-21 01:58:47.692185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.829 [2024-11-21 01:58:47.718110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.829 [2024-11-21 01:58:47.718142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:03.829 [2024-11-21 01:58:47.718150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.830 [2024-11-21 01:58:47.718155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.830 [2024-11-21 01:58:47.718197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:03.830 [2024-11-21 01:58:47.718203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:03.830 [2024-11-21 01:58:47.718209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.830 [2024-11-21 01:58:47.718214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.830 [2024-11-21 01:58:47.718259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.830 [2024-11-21 01:58:47.718266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:03.830 [2024-11-21 01:58:47.718276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.830 [2024-11-21 01:58:47.718281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.830 [2024-11-21 01:58:47.718292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.830 [2024-11-21 01:58:47.718298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:03.830 [2024-11-21 01:58:47.718304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.830 [2024-11-21 01:58:47.718311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:03.830 [2024-11-21 01:58:47.777178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:03.830 [2024-11-21 01:58:47.777207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:03.830 [2024-11-21 01:58:47.777219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:03.830 [2024-11-21 01:58:47.777225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:04.092 [2024-11-21 01:58:47.826383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:04.092 [2024-11-21 01:58:47.826441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:04.092 [2024-11-21 01:58:47.826503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:04.092 [2024-11-21 01:58:47.826576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:04.092 [2024-11-21 01:58:47.826608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:04.092 [2024-11-21 01:58:47.826641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:04.092 [2024-11-21 01:58:47.826688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:04.092 [2024-11-21 01:58:47.826736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:04.092 [2024-11-21 01:58:47.826742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:04.092 [2024-11-21 01:58:47.826748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.092 [2024-11-21 01:58:47.826838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.508 ms, result 0 00:33:04.665 00:33:04.665 00:33:04.665 01:58:48 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:04.665 [2024-11-21 01:58:48.439117] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:33:04.665 [2024-11-21 01:58:48.439238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84945 ] 00:33:04.665 [2024-11-21 01:58:48.594781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.927 [2024-11-21 01:58:48.669764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.927 [2024-11-21 01:58:48.876142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:04.927 [2024-11-21 01:58:48.876195] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:05.190 [2024-11-21 01:58:49.023338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.023376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:05.190 [2024-11-21 01:58:49.023389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:05.190 [2024-11-21 01:58:49.023396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.023429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.023437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:05.190 [2024-11-21 01:58:49.023445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:05.190 [2024-11-21 01:58:49.023450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.023462] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:05.190 [2024-11-21 01:58:49.024029] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:05.190 [2024-11-21 01:58:49.024042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.024049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:05.190 [2024-11-21 01:58:49.024055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:33:05.190 [2024-11-21 01:58:49.024060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024254] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:05.190 [2024-11-21 01:58:49.024274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.024280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:05.190 [2024-11-21 01:58:49.024289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:05.190 [2024-11-21 01:58:49.024295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.024355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:05.190 [2024-11-21 01:58:49.024361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:05.190 [2024-11-21 01:58:49.024367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:05.190 [2024-11-21 01:58:49.024574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:05.190 [2024-11-21 01:58:49.024580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:33:05.190 [2024-11-21 01:58:49.024585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.024650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:05.190 [2024-11-21 01:58:49.024656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:05.190 [2024-11-21 01:58:49.024662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.024684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:05.190 [2024-11-21 01:58:49.024690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:05.190 [2024-11-21 01:58:49.024697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.024709] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:05.190 [2024-11-21 01:58:49.027522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.027656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:05.190 [2024-11-21 01:58:49.027669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:33:05.190 [2024-11-21 01:58:49.027675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.027702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.027709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:05.190 [2024-11-21 01:58:49.027715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:05.190 [2024-11-21 01:58:49.027720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.027754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:05.190 [2024-11-21 01:58:49.027771] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:05.190 [2024-11-21 01:58:49.027798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:05.190 [2024-11-21 01:58:49.027810] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:05.190 [2024-11-21 01:58:49.027889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:05.190 [2024-11-21 01:58:49.027897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:05.190 [2024-11-21 01:58:49.027905] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:05.190 [2024-11-21 01:58:49.027912] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:05.190 [2024-11-21 01:58:49.027919] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:05.190 [2024-11-21 01:58:49.027926] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:05.190 [2024-11-21 01:58:49.027933] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:05.190 [2024-11-21 01:58:49.027938] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:05.190 [2024-11-21 01:58:49.027944] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:05.190 [2024-11-21 01:58:49.027950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.027955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:05.190 [2024-11-21 01:58:49.027961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:33:05.190 [2024-11-21 01:58:49.027966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.028029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.190 [2024-11-21 01:58:49.028035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:05.190 [2024-11-21 01:58:49.028040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:05.190 [2024-11-21 01:58:49.028047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.190 [2024-11-21 01:58:49.028121] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:05.190 [2024-11-21 01:58:49.028129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:05.190 [2024-11-21 01:58:49.028135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:05.190 [2024-11-21 01:58:49.028141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.190 [2024-11-21 01:58:49.028146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:05.190 [2024-11-21 01:58:49.028152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:05.190 [2024-11-21 01:58:49.028157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:05.190 [2024-11-21 01:58:49.028162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:05.190 [2024-11-21 01:58:49.028168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:05.190 [2024-11-21 01:58:49.028173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:05.190 [2024-11-21 01:58:49.028178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:05.190 [2024-11-21 01:58:49.028184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:05.190 [2024-11-21 01:58:49.028189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:05.190 [2024-11-21 01:58:49.028195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:05.191 [2024-11-21 01:58:49.028200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:05.191 [2024-11-21 01:58:49.028205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:05.191 [2024-11-21 01:58:49.028219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028224] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:05.191 [2024-11-21 01:58:49.028234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:05.191 [2024-11-21 01:58:49.028249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:05.191 [2024-11-21 01:58:49.028263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:05.191 [2024-11-21 01:58:49.028277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:05.191 [2024-11-21 01:58:49.028292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:05.191 [2024-11-21 01:58:49.028302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:05.191 [2024-11-21 01:58:49.028306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:05.191 [2024-11-21 01:58:49.028311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:05.191 [2024-11-21 01:58:49.028316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:05.191 [2024-11-21 01:58:49.028320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:05.191 [2024-11-21 01:58:49.028325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:05.191 [2024-11-21 01:58:49.028334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:05.191 [2024-11-21 01:58:49.028339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:05.191 [2024-11-21 01:58:49.028351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:05.191 [2024-11-21 01:58:49.028357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:05.191 [2024-11-21 01:58:49.028368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:05.191 [2024-11-21 01:58:49.028373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:05.191 [2024-11-21 01:58:49.028378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:05.191 
[2024-11-21 01:58:49.028383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:05.191 [2024-11-21 01:58:49.028388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:05.191 [2024-11-21 01:58:49.028392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:05.191 [2024-11-21 01:58:49.028398] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:05.191 [2024-11-21 01:58:49.028406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:05.191 [2024-11-21 01:58:49.028418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:05.191 [2024-11-21 01:58:49.028423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:05.191 [2024-11-21 01:58:49.028428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:05.191 [2024-11-21 01:58:49.028434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:05.191 [2024-11-21 01:58:49.028439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:05.191 [2024-11-21 01:58:49.028444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:05.191 [2024-11-21 01:58:49.028449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:05.191 [2024-11-21 01:58:49.028454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:05.191 [2024-11-21 01:58:49.028459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:05.191 [2024-11-21 01:58:49.028485] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:05.191 [2024-11-21 01:58:49.028491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:05.191 [2024-11-21 01:58:49.028503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:05.191 [2024-11-21 01:58:49.028508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:05.191 [2024-11-21 01:58:49.028514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:05.191 [2024-11-21 01:58:49.028519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.028525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:05.191 [2024-11-21 01:58:49.028530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:33:05.191 [2024-11-21 01:58:49.028536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.047007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.047110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:05.191 [2024-11-21 01:58:49.047123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.442 ms 00:33:05.191 [2024-11-21 01:58:49.047128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.047189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.047196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:05.191 [2024-11-21 01:58:49.047202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:05.191 [2024-11-21 01:58:49.047210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.091005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.091036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:05.191 [2024-11-21 01:58:49.091046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.757 ms 00:33:05.191 [2024-11-21 01:58:49.091052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.091085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.091092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:05.191 [2024-11-21 01:58:49.091099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:05.191 [2024-11-21 01:58:49.091105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.091176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.091185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:05.191 [2024-11-21 01:58:49.091191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:05.191 [2024-11-21 01:58:49.091196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.091283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.091291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:05.191 [2024-11-21 01:58:49.091297] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:33:05.191 [2024-11-21 01:58:49.091302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.101749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.101776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:05.191 [2024-11-21 01:58:49.101784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.433 ms 00:33:05.191 [2024-11-21 01:58:49.101789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.101872] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:05.191 [2024-11-21 01:58:49.101881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:05.191 [2024-11-21 01:58:49.101888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.191 [2024-11-21 01:58:49.101894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:05.191 [2024-11-21 01:58:49.101902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:05.191 [2024-11-21 01:58:49.101908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.191 [2024-11-21 01:58:49.111150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.111270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:05.192 [2024-11-21 01:58:49.111282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.230 ms 00:33:05.192 [2024-11-21 01:58:49.111288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.111375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.111382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:05.192 [2024-11-21 01:58:49.111388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:05.192 [2024-11-21 01:58:49.111394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.111422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.111429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:05.192 [2024-11-21 01:58:49.111436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:05.192 [2024-11-21 01:58:49.111441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.111894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.111908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:05.192 [2024-11-21 01:58:49.111914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:33:05.192 [2024-11-21 01:58:49.111920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.111932] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:05.192 [2024-11-21 01:58:49.111941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.111947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:05.192 [2024-11-21 01:58:49.111953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:05.192 [2024-11-21 01:58:49.111959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.120450] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:05.192 [2024-11-21 01:58:49.120551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.120559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:05.192 [2024-11-21 01:58:49.120565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.569 ms 00:33:05.192 [2024-11-21 01:58:49.120571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.122118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.122209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:05.192 [2024-11-21 01:58:49.122224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.534 ms 00:33:05.192 [2024-11-21 01:58:49.122230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.122292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.122299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:05.192 [2024-11-21 01:58:49.122306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:33:05.192 [2024-11-21 01:58:49.122312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.122350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.122358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:05.192 [2024-11-21 01:58:49.122367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:05.192 [2024-11-21 01:58:49.122372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.122393] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:05.192 [2024-11-21 01:58:49.122401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.122406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:05.192 [2024-11-21 01:58:49.122411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:05.192 [2024-11-21 01:58:49.122417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.140746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.140850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:05.192 [2024-11-21 01:58:49.140863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.314 ms 00:33:05.192 [2024-11-21 01:58:49.140868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.140919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.192 [2024-11-21 01:58:49.140927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:05.192 [2024-11-21 01:58:49.140933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.027 ms 00:33:05.192 [2024-11-21 01:58:49.140938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.192 [2024-11-21 01:58:49.141744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.084 ms, result 0 00:33:06.578  [2024-11-21T01:58:51.479Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-21T01:58:52.425Z] Copying: 33/1024 [MB] (15 MBps) [2024-11-21T01:58:53.369Z] Copying: 46/1024 [MB] (13 MBps) [2024-11-21T01:58:54.312Z] Copying: 61/1024 [MB] (14 MBps) [2024-11-21T01:58:55.700Z] Copying: 71/1024 [MB] (10 MBps) [2024-11-21T01:58:56.644Z] Copying: 90/1024 [MB] (18 MBps) [2024-11-21T01:58:57.590Z] Copying: 111/1024 [MB] (21 MBps) [2024-11-21T01:58:58.536Z] Copying: 131/1024 [MB] (19 MBps) [2024-11-21T01:58:59.481Z] Copying: 149/1024 [MB] (17 MBps) [2024-11-21T01:59:00.422Z] Copying: 166/1024 [MB] (17 MBps) [2024-11-21T01:59:01.364Z] Copying: 184/1024 [MB] (17 MBps) [2024-11-21T01:59:02.305Z] Copying: 227/1024 [MB] (43 MBps) [2024-11-21T01:59:03.691Z] Copying: 248/1024 [MB] (21 MBps) [2024-11-21T01:59:04.630Z] Copying: 274/1024 [MB] (25 MBps) [2024-11-21T01:59:05.575Z] Copying: 285/1024 [MB] (11 MBps) [2024-11-21T01:59:06.519Z] Copying: 303/1024 [MB] (18 MBps) [2024-11-21T01:59:07.462Z] Copying: 316/1024 [MB] (12 MBps) [2024-11-21T01:59:08.407Z] Copying: 330/1024 [MB] (13 MBps) [2024-11-21T01:59:09.360Z] Copying: 351/1024 [MB] (21 MBps) [2024-11-21T01:59:10.303Z] Copying: 374/1024 [MB] (23 MBps) [2024-11-21T01:59:11.691Z] Copying: 394/1024 [MB] (19 MBps) [2024-11-21T01:59:12.339Z] Copying: 412/1024 [MB] (18 MBps) [2024-11-21T01:59:13.389Z] Copying: 434/1024 [MB] (22 MBps) [2024-11-21T01:59:14.331Z] Copying: 459/1024 [MB] (24 MBps) [2024-11-21T01:59:15.718Z] Copying: 475/1024 [MB] (16 MBps) [2024-11-21T01:59:16.292Z] Copying: 496/1024 [MB] (20 MBps) [2024-11-21T01:59:17.678Z] Copying: 516/1024 [MB] (20 MBps) [2024-11-21T01:59:18.622Z] Copying: 549/1024 [MB] (33 MBps) [2024-11-21T01:59:19.564Z] Copying: 566/1024 [MB] (16 MBps) [2024-11-21T01:59:20.505Z] Copying: 580/1024 [MB] (13 MBps) [2024-11-21T01:59:21.447Z] Copying: 600/1024 [MB] (19 MBps) [2024-11-21T01:59:22.392Z] Copying: 617/1024 [MB] (17 MBps) [2024-11-21T01:59:23.352Z] Copying: 632/1024 [MB] (15 MBps) [2024-11-21T01:59:24.294Z] Copying: 645/1024 [MB] (13 MBps) [2024-11-21T01:59:25.678Z] Copying: 657/1024 [MB] (11 MBps) [2024-11-21T01:59:26.619Z] Copying: 672/1024 [MB] (15 MBps) [2024-11-21T01:59:27.561Z] Copying: 684/1024 [MB] (12 MBps) [2024-11-21T01:59:28.505Z] Copying: 699/1024 [MB] (15 MBps) [2024-11-21T01:59:29.446Z] Copying: 723/1024 [MB] (23 MBps) [2024-11-21T01:59:30.388Z] Copying: 745/1024 [MB] (21 MBps) [2024-11-21T01:59:31.330Z] Copying: 756/1024 [MB] (11 MBps) [2024-11-21T01:59:32.714Z] Copying: 767/1024 [MB] (11 MBps) [2024-11-21T01:59:33.286Z] Copying: 779/1024 [MB] (11 MBps) [2024-11-21T01:59:34.671Z] Copying: 791/1024 [MB] (12 MBps) [2024-11-21T01:59:35.615Z] Copying: 802/1024 [MB] (10 MBps) [2024-11-21T01:59:36.558Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-21T01:59:37.500Z] Copying: 824/1024 [MB] (11 MBps) [2024-11-21T01:59:38.446Z] Copying: 836/1024 [MB] (11 MBps) [2024-11-21T01:59:39.387Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-21T01:59:40.328Z] Copying: 859/1024 [MB] (11 MBps) [2024-11-21T01:59:41.714Z] Copying: 870/1024 [MB] (11 MBps) [2024-11-21T01:59:42.286Z] Copying: 882/1024 [MB] (11 MBps) [2024-11-21T01:59:43.673Z] Copying: 894/1024 [MB] (11 MBps) [2024-11-21T01:59:44.616Z] Copying: 905/1024 [MB] 
(11 MBps) [2024-11-21T01:59:45.556Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-21T01:59:46.498Z] Copying: 929/1024 [MB] (11 MBps) [2024-11-21T01:59:47.439Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-21T01:59:48.383Z] Copying: 952/1024 [MB] (11 MBps) [2024-11-21T01:59:49.326Z] Copying: 964/1024 [MB] (12 MBps) [2024-11-21T01:59:50.711Z] Copying: 986/1024 [MB] (22 MBps) [2024-11-21T01:59:51.284Z] Copying: 998/1024 [MB] (11 MBps) [2024-11-21T01:59:52.230Z] Copying: 1009/1024 [MB] (11 MBps) [2024-11-21T01:59:52.230Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 01:59:52.180169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.273 [2024-11-21 01:59:52.180419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:08.273 [2024-11-21 01:59:52.180445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:08.273 [2024-11-21 01:59:52.180454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.273 [2024-11-21 01:59:52.180486] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:08.273 [2024-11-21 01:59:52.183957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.273 [2024-11-21 01:59:52.184149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:08.273 [2024-11-21 01:59:52.184170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.453 ms 00:34:08.273 [2024-11-21 01:59:52.184179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.273 [2024-11-21 01:59:52.184470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.273 [2024-11-21 01:59:52.184484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:08.273 [2024-11-21 01:59:52.184494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:34:08.273 [2024-11-21 01:59:52.184503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.273 [2024-11-21 01:59:52.184531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.273 [2024-11-21 01:59:52.184544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:08.273 [2024-11-21 01:59:52.184553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:08.273 [2024-11-21 01:59:52.184560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.273 [2024-11-21 01:59:52.184631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.273 [2024-11-21 01:59:52.184642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:08.273 [2024-11-21 01:59:52.184650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:34:08.273 [2024-11-21 01:59:52.184659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.273 [2024-11-21 01:59:52.184673] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:08.273 [2024-11-21 01:59:52.184686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184713] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:08.273 [2024-11-21 01:59:52.184828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 
01:59:52.184904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.184996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 
00:34:08.274 [2024-11-21 01:59:52.185101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 
wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:08.274 [2024-11-21 01:59:52.185476] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:08.274 [2024-11-21 01:59:52.185484] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 156fbf37-752a-40f3-a42b-1d6f226a350c 00:34:08.274 [2024-11-21 01:59:52.185495] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:08.274 
[2024-11-21 01:59:52.185503] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:08.274 [2024-11-21 01:59:52.185509] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:08.274 [2024-11-21 01:59:52.185517] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:08.274 [2024-11-21 01:59:52.185525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:08.274 [2024-11-21 01:59:52.185533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:08.274 [2024-11-21 01:59:52.185540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:08.274 [2024-11-21 01:59:52.185547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:08.274 [2024-11-21 01:59:52.185553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:08.274 [2024-11-21 01:59:52.185560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.274 [2024-11-21 01:59:52.185568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:08.274 [2024-11-21 01:59:52.185576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms 00:34:08.274 [2024-11-21 01:59:52.185583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.274 [2024-11-21 01:59:52.200103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.274 [2024-11-21 01:59:52.200154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:08.274 [2024-11-21 01:59:52.200167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.501 ms 00:34:08.274 [2024-11-21 01:59:52.200174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.274 [2024-11-21 01:59:52.200562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:08.274 [2024-11-21 01:59:52.200579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:08.274 [2024-11-21 01:59:52.200588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:34:08.274 [2024-11-21 01:59:52.200603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.237204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.237257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:08.535 [2024-11-21 01:59:52.237270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.237280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.237352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.237362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:08.535 [2024-11-21 01:59:52.237373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.237386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.237441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.237452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:08.535 [2024-11-21 01:59:52.237462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.237471] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.237489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.237498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:08.535 [2024-11-21 01:59:52.237507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.237516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.322228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.322288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:08.535 [2024-11-21 01:59:52.322302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.322311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.392658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.392721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:08.535 [2024-11-21 01:59:52.392734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.392743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.392829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.392839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:08.535 [2024-11-21 01:59:52.392849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.392858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.392902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.392913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:08.535 [2024-11-21 01:59:52.392921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.392928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.393009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.393020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:08.535 [2024-11-21 01:59:52.393028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.393036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.393066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.393076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:08.535 [2024-11-21 01:59:52.393084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.393093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.393132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.393144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:08.535 [2024-11-21 01:59:52.393153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:34:08.535 [2024-11-21 01:59:52.393161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.393209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:08.535 [2024-11-21 01:59:52.393220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:08.535 [2024-11-21 01:59:52.393228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:08.535 [2024-11-21 01:59:52.393236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:08.535 [2024-11-21 01:59:52.393371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.162 ms, result 0 00:34:09.480 00:34:09.480 00:34:09.480 01:59:53 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:12.030 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:12.030 01:59:55 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:12.030 [2024-11-21 01:59:55.518060] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 00:34:12.030 [2024-11-21 01:59:55.518467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85611 ] 00:34:12.031 [2024-11-21 01:59:55.679350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.031 [2024-11-21 01:59:55.803943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.292 [2024-11-21 01:59:56.086265] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:12.292 [2024-11-21 01:59:56.086473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:12.292 [2024-11-21 01:59:56.244160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.244376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:12.555 [2024-11-21 01:59:56.244408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:12.555 [2024-11-21 01:59:56.244418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.244484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.244496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:12.555 [2024-11-21 01:59:56.244507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:34:12.555 [2024-11-21 01:59:56.244515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.244537] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:12.555 [2024-11-21 01:59:56.245247] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:12.555 [2024-11-21 01:59:56.245267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.245276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:12.555 
[2024-11-21 01:59:56.245286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:34:12.555 [2024-11-21 01:59:56.245294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.245576] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:12.555 [2024-11-21 01:59:56.245600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.245608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:12.555 [2024-11-21 01:59:56.245645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:34:12.555 [2024-11-21 01:59:56.245654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.245707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.245716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:12.555 [2024-11-21 01:59:56.245724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:34:12.555 [2024-11-21 01:59:56.245732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.246038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.246053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:12.555 [2024-11-21 01:59:56.246062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:34:12.555 [2024-11-21 01:59:56.246069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.246164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.246175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:12.555 [2024-11-21 01:59:56.246183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:34:12.555 [2024-11-21 01:59:56.246191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.246214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.246223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:12.555 [2024-11-21 01:59:56.246231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:12.555 [2024-11-21 01:59:56.246242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.246263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:12.555 [2024-11-21 01:59:56.250481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.250521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:12.555 [2024-11-21 01:59:56.250532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.222 ms 00:34:12.555 [2024-11-21 01:59:56.250539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.250572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.555 [2024-11-21 01:59:56.250580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:12.555 [2024-11-21 01:59:56.250589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:12.555 
[2024-11-21 01:59:56.250596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.555 [2024-11-21 01:59:56.250670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:12.555 [2024-11-21 01:59:56.250694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:12.555 [2024-11-21 01:59:56.250733] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:12.555 [2024-11-21 01:59:56.250749] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:12.555 [2024-11-21 01:59:56.250870] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:12.555 [2024-11-21 01:59:56.250882] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:12.555 [2024-11-21 01:59:56.250892] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:12.555 [2024-11-21 01:59:56.250902] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:12.555 [2024-11-21 01:59:56.250912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:12.555 [2024-11-21 01:59:56.250921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:12.556 [2024-11-21 01:59:56.250931] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:12.556 [2024-11-21 01:59:56.250939] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:12.556 [2024-11-21 01:59:56.250946] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:12.556 [2024-11-21 01:59:56.250954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.556 [2024-11-21 01:59:56.250961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:12.556 [2024-11-21 01:59:56.250969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:34:12.556 [2024-11-21 01:59:56.250976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.556 [2024-11-21 01:59:56.251063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.556 [2024-11-21 01:59:56.251073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:12.556 [2024-11-21 01:59:56.251080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:34:12.556 [2024-11-21 01:59:56.251090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.556 [2024-11-21 01:59:56.251191] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:12.556 [2024-11-21 01:59:56.251202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:12.556 [2024-11-21 01:59:56.251211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:12.556 [2024-11-21 01:59:56.251234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251241] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 80.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:12.556 [2024-11-21 01:59:56.251256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:12.556 [2024-11-21 01:59:56.251270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:12.556 [2024-11-21 01:59:56.251280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:12.556 [2024-11-21 01:59:56.251287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:12.556 [2024-11-21 01:59:56.251295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:12.556 [2024-11-21 01:59:56.251301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:12.556 [2024-11-21 01:59:56.251308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:12.556 [2024-11-21 01:59:56.251328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:12.556 [2024-11-21 01:59:56.251348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:12.556 [2024-11-21 01:59:56.251367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:12.556 [2024-11-21 01:59:56.251386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:12.556 [2024-11-21 01:59:56.251406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:12.556 [2024-11-21 01:59:56.251428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:12.556 [2024-11-21 01:59:56.251442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:12.556 [2024-11-21 01:59:56.251448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:12.556 [2024-11-21 01:59:56.251455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:12.556 [2024-11-21 01:59:56.251462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:12.556 [2024-11-21 01:59:56.251468] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:12.556 [2024-11-21 01:59:56.251475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:12.556 [2024-11-21 01:59:56.251489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:12.556 [2024-11-21 01:59:56.251497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251504] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:12.556 [2024-11-21 01:59:56.251513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:12.556 [2024-11-21 01:59:56.251520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:12.556 [2024-11-21 01:59:56.251535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:12.556 [2024-11-21 01:59:56.251542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:12.556 [2024-11-21 01:59:56.251549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:12.556 [2024-11-21 01:59:56.251556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:12.556 [2024-11-21 01:59:56.251563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:12.556 [2024-11-21 01:59:56.251570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:12.556 [2024-11-21 01:59:56.251577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:12.556 [2024-11-21 01:59:56.251588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:12.556 [2024-11-21 01:59:56.251604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:12.556 [2024-11-21 01:59:56.251625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:12.556 [2024-11-21 01:59:56.251633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:12.556 [2024-11-21 01:59:56.251640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:12.556 [2024-11-21 01:59:56.251647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:12.556 [2024-11-21 01:59:56.251655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:12.556 [2024-11-21 01:59:56.251661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:12.556 [2024-11-21 01:59:56.251669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:12.556 
[2024-11-21 01:59:56.251677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:12.556 [2024-11-21 01:59:56.251714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:12.556 [2024-11-21 01:59:56.251722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:12.556 [2024-11-21 01:59:56.251738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:12.556 [2024-11-21 01:59:56.251746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:12.556 [2024-11-21 01:59:56.251754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:12.556 [2024-11-21 01:59:56.251762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.556 [2024-11-21 01:59:56.251771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:12.556 [2024-11-21 01:59:56.251779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:34:12.556 [2024-11-21 01:59:56.251786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.556 [2024-11-21 01:59:56.279074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.556 [2024-11-21 01:59:56.279118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:12.556 [2024-11-21 01:59:56.279129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.245 ms 00:34:12.556 [2024-11-21 01:59:56.279138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.556 [2024-11-21 01:59:56.279223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.556 [2024-11-21 01:59:56.279233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:12.556 [2024-11-21 01:59:56.279241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:12.556 [2024-11-21 01:59:56.279253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.556 [2024-11-21 01:59:56.328211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.328410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:12.557 [2024-11-21 01:59:56.328432] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.903 ms 00:34:12.557 [2024-11-21 01:59:56.328442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.328498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.328509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:12.557 [2024-11-21 01:59:56.328518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:12.557 [2024-11-21 01:59:56.328526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.328670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.328683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:12.557 [2024-11-21 01:59:56.328692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:34:12.557 [2024-11-21 01:59:56.328701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.328831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.328844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:12.557 [2024-11-21 01:59:56.328852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:34:12.557 [2024-11-21 01:59:56.328860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.344266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.344313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:12.557 [2024-11-21 01:59:56.344324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.385 ms 00:34:12.557 [2024-11-21 01:59:56.344333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.344481] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:12.557 [2024-11-21 01:59:56.344494] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:12.557 [2024-11-21 01:59:56.344504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.344515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:12.557 [2024-11-21 01:59:56.344524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:34:12.557 [2024-11-21 01:59:56.344532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.357004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.357046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:12.557 [2024-11-21 01:59:56.357057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.455 ms 00:34:12.557 [2024-11-21 01:59:56.357066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.357193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.357204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:12.557 [2024-11-21 01:59:56.357213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:34:12.557 
[2024-11-21 01:59:56.357226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.357275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.357286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:12.557 [2024-11-21 01:59:56.357296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:12.557 [2024-11-21 01:59:56.357304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.357995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.358023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:12.557 [2024-11-21 01:59:56.358033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:34:12.557 [2024-11-21 01:59:56.358041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.358060] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:12.557 [2024-11-21 01:59:56.358072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.358080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:12.557 [2024-11-21 01:59:56.358100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:12.557 [2024-11-21 01:59:56.358108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.370529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:12.557 [2024-11-21 01:59:56.370844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.370862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:12.557 [2024-11-21 01:59:56.370871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.715 ms 00:34:12.557 [2024-11-21 01:59:56.370880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.372966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.372998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:12.557 [2024-11-21 01:59:56.373008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:34:12.557 [2024-11-21 01:59:56.373015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.373109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.373119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:12.557 [2024-11-21 01:59:56.373128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:12.557 [2024-11-21 01:59:56.373136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.373159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.373168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:12.557 [2024-11-21 01:59:56.373180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:12.557 [2024-11-21 01:59:56.373188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 
[2024-11-21 01:59:56.373217] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:12.557 [2024-11-21 01:59:56.373227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.373235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:12.557 [2024-11-21 01:59:56.373243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:12.557 [2024-11-21 01:59:56.373250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.399345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.399402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:12.557 [2024-11-21 01:59:56.399415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.075 ms 00:34:12.557 [2024-11-21 01:59:56.399423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.399510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.557 [2024-11-21 01:59:56.399520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:12.557 [2024-11-21 01:59:56.399530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:12.557 [2024-11-21 01:59:56.399538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.557 [2024-11-21 01:59:56.400757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.113 ms, result 0 00:34:13.502  [2024-11-21T01:59:58.847Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-21T01:59:59.418Z] Copying: 26/1024 [MB] (15 MBps) [2024-11-21T02:00:00.805Z] Copying: 36/1024 [MB] (10 MBps) [2024-11-21T02:00:01.748Z] Copying: 50/1024 [MB] (13 MBps) [2024-11-21T02:00:02.736Z] Copying: 77/1024 [MB] (26 MBps) [2024-11-21T02:00:03.753Z] Copying: 100/1024 [MB] (23 MBps) [2024-11-21T02:00:04.698Z] Copying: 114/1024 [MB] (14 MBps) [2024-11-21T02:00:05.639Z] Copying: 127/1024 [MB] (12 MBps) [2024-11-21T02:00:06.581Z] Copying: 137/1024 [MB] (10 MBps) [2024-11-21T02:00:07.522Z] Copying: 148/1024 [MB] (10 MBps) [2024-11-21T02:00:08.465Z] Copying: 158/1024 [MB] (10 MBps) [2024-11-21T02:00:09.849Z] Copying: 179/1024 [MB] (20 MBps) [2024-11-21T02:00:10.422Z] Copying: 202/1024 [MB] (22 MBps) [2024-11-21T02:00:11.809Z] Copying: 221/1024 [MB] (18 MBps) [2024-11-21T02:00:12.753Z] Copying: 236/1024 [MB] (15 MBps) [2024-11-21T02:00:13.697Z] Copying: 256/1024 [MB] (19 MBps) [2024-11-21T02:00:14.641Z] Copying: 270/1024 [MB] (14 MBps) [2024-11-21T02:00:15.585Z] Copying: 286/1024 [MB] (15 MBps) [2024-11-21T02:00:16.530Z] Copying: 296/1024 [MB] (10 MBps) [2024-11-21T02:00:17.473Z] Copying: 309/1024 [MB] (12 MBps) [2024-11-21T02:00:18.417Z] Copying: 337/1024 [MB] (28 MBps) [2024-11-21T02:00:19.805Z] Copying: 381/1024 [MB] (43 MBps) [2024-11-21T02:00:20.749Z] Copying: 417/1024 [MB] (36 MBps) [2024-11-21T02:00:21.694Z] Copying: 452/1024 [MB] (35 MBps) [2024-11-21T02:00:22.640Z] Copying: 478/1024 [MB] (25 MBps) [2024-11-21T02:00:23.584Z] Copying: 499/1024 [MB] (21 MBps) [2024-11-21T02:00:24.529Z] Copying: 510/1024 [MB] (10 MBps) [2024-11-21T02:00:25.468Z] Copying: 520/1024 [MB] (10 MBps) [2024-11-21T02:00:26.852Z] Copying: 538/1024 [MB] (17 MBps) [2024-11-21T02:00:27.454Z] Copying: 560/1024 [MB] (22 MBps) [2024-11-21T02:00:28.841Z] Copying: 578/1024 [MB] (17 MBps) [2024-11-21T02:00:29.782Z] 
Copying: 588/1024 [MB] (10 MBps) [2024-11-21T02:00:30.725Z] Copying: 610/1024 [MB] (21 MBps) [2024-11-21T02:00:31.671Z] Copying: 624/1024 [MB] (13 MBps) [2024-11-21T02:00:32.615Z] Copying: 638/1024 [MB] (14 MBps) [2024-11-21T02:00:33.561Z] Copying: 656/1024 [MB] (18 MBps) [2024-11-21T02:00:34.505Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-21T02:00:35.451Z] Copying: 678/1024 [MB] (10 MBps) [2024-11-21T02:00:36.840Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-21T02:00:37.786Z] Copying: 712/1024 [MB] (23 MBps) [2024-11-21T02:00:38.730Z] Copying: 732/1024 [MB] (20 MBps) [2024-11-21T02:00:39.686Z] Copying: 757/1024 [MB] (24 MBps) [2024-11-21T02:00:40.631Z] Copying: 787/1024 [MB] (29 MBps) [2024-11-21T02:00:41.575Z] Copying: 807/1024 [MB] (20 MBps) [2024-11-21T02:00:42.520Z] Copying: 821/1024 [MB] (13 MBps) [2024-11-21T02:00:43.464Z] Copying: 832/1024 [MB] (11 MBps) [2024-11-21T02:00:44.853Z] Copying: 854/1024 [MB] (21 MBps) [2024-11-21T02:00:45.427Z] Copying: 867/1024 [MB] (12 MBps) [2024-11-21T02:00:46.813Z] Copying: 878/1024 [MB] (11 MBps) [2024-11-21T02:00:47.756Z] Copying: 911/1024 [MB] (32 MBps) [2024-11-21T02:00:48.700Z] Copying: 952/1024 [MB] (41 MBps) [2024-11-21T02:00:49.643Z] Copying: 969/1024 [MB] (17 MBps) [2024-11-21T02:00:50.587Z] Copying: 987/1024 [MB] (17 MBps) [2024-11-21T02:00:51.532Z] Copying: 1001/1024 [MB] (13 MBps) [2024-11-21T02:00:52.473Z] Copying: 1017/1024 [MB] (15 MBps) [2024-11-21T02:00:52.473Z] Copying: 1048520/1048576 [kB] (6776 kBps) [2024-11-21T02:00:52.473Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-21 02:00:52.468762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.516 [2024-11-21 02:00:52.468805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:08.516 [2024-11-21 02:00:52.468817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:08.516 [2024-11-21 02:00:52.468824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.779 [2024-11-21 02:00:52.471129] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:08.779 [2024-11-21 02:00:52.474250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.779 [2024-11-21 02:00:52.474275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:08.779 [2024-11-21 02:00:52.474284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:35:08.779 [2024-11-21 02:00:52.474291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.779 [2024-11-21 02:00:52.482336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.779 [2024-11-21 02:00:52.482366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:08.779 [2024-11-21 02:00:52.482374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.366 ms 00:35:08.779 [2024-11-21 02:00:52.482380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.779 [2024-11-21 02:00:52.482400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.779 [2024-11-21 02:00:52.482407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:08.779 [2024-11-21 02:00:52.482414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:08.779 [2024-11-21 02:00:52.482420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.779 [2024-11-21 02:00:52.482457] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.779 [2024-11-21 02:00:52.482464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:08.779 [2024-11-21 02:00:52.482472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:08.779 [2024-11-21 02:00:52.482478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.779 [2024-11-21 02:00:52.482489] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:08.779 [2024-11-21 02:00:52.482499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:35:08.779 [2024-11-21 02:00:52.482506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:08.779 [2024-11-21 02:00:52.482572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 
02:00:52.482633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 
00:35:08.780 [2024-11-21 02:00:52.482784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 
wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.482996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:08.780 [2024-11-21 02:00:52.483119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:08.780 [2024-11-21 02:00:52.483125] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 156fbf37-752a-40f3-a42b-1d6f226a350c 00:35:08.780 [2024-11-21 02:00:52.483131] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:35:08.781 [2024-11-21 02:00:52.483136] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:35:08.781 [2024-11-21 02:00:52.483141] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:35:08.781 [2024-11-21 02:00:52.483147] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:35:08.781 [2024-11-21 02:00:52.483153] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:08.781 [2024-11-21 02:00:52.483159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:08.781 [2024-11-21 02:00:52.483166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:08.781 [2024-11-21 02:00:52.483172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:08.781 [2024-11-21 02:00:52.483176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:08.781 [2024-11-21 02:00:52.483182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.781 [2024-11-21 02:00:52.483187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:08.781 [2024-11-21 02:00:52.483193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:35:08.781 [2024-11-21 02:00:52.483199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.493034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.781 [2024-11-21 02:00:52.493058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:08.781 [2024-11-21 02:00:52.493065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.824 ms 00:35:08.781 [2024-11-21 02:00:52.493074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.493334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:08.781 [2024-11-21 02:00:52.493341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:08.781 [2024-11-21 02:00:52.493347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:35:08.781 [2024-11-21 02:00:52.493353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.518891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.518917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:08.781 [2024-11-21 02:00:52.518926] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.518931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.518972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.518978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:08.781 [2024-11-21 02:00:52.518984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.518990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.519025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.519032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:08.781 [2024-11-21 02:00:52.519038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.519045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.519057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.519063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:08.781 [2024-11-21 02:00:52.519068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.519074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.577654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.577684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:08.781 [2024-11-21 02:00:52.577696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.577702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:08.781 [2024-11-21 02:00:52.625239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:08.781 [2024-11-21 02:00:52.625313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:08.781 [2024-11-21 02:00:52.625360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:35:08.781 [2024-11-21 02:00:52.625432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:08.781 [2024-11-21 02:00:52.625471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:08.781 [2024-11-21 02:00:52.625514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:08.781 [2024-11-21 02:00:52.625560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:08.781 [2024-11-21 02:00:52.625565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:08.781 [2024-11-21 02:00:52.625571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:08.781 [2024-11-21 02:00:52.625676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 158.676 ms, result 0 00:35:10.166 00:35:10.166 00:35:10.166 02:00:53 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:10.166 [2024-11-21 02:00:54.007462] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
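A note on the statistics dumped above: the WAF value reported by ftl_debug.c is consistent with total writes divided by user writes, i.e. 128544 / 128512 ≈ 1.0002. Below is a minimal bash sketch for recomputing it from a saved copy of this console output; the log path is hypothetical, and the parsing assumes the "total writes:" / "user writes:" lines appear exactly as printed by ftl_dev_dump_stats above.

    #!/usr/bin/env bash
    # Recompute the FTL write amplification factor (WAF) from the dump_stats lines.
    log=/tmp/ftl_console.log   # hypothetical path to the captured console output
    total=$(grep -m1 -o 'total writes: [0-9]*' "$log" | awk '{print $3}')
    user=$(grep -m1 -o 'user writes: [0-9]*' "$log" | awk '{print $3}')
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'

With the counters shown above this prints "WAF: 1.0002", matching the value reported in the dump.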
00:35:10.167 [2024-11-21 02:00:54.007592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86191 ] 00:35:10.427 [2024-11-21 02:00:54.164408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:10.427 [2024-11-21 02:00:54.240124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.690 [2024-11-21 02:00:54.444253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:10.690 [2024-11-21 02:00:54.444297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:10.690 [2024-11-21 02:00:54.591435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.591581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:10.690 [2024-11-21 02:00:54.591601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:10.690 [2024-11-21 02:00:54.591608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.591662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.591670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:10.690 [2024-11-21 02:00:54.591678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:35:10.690 [2024-11-21 02:00:54.591684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.591699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:10.690 [2024-11-21 02:00:54.592201] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:10.690 [2024-11-21 02:00:54.592212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.592217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:10.690 [2024-11-21 02:00:54.592224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:35:10.690 [2024-11-21 02:00:54.592230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592425] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:10.690 [2024-11-21 02:00:54.592442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.592447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:10.690 [2024-11-21 02:00:54.592456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:35:10.690 [2024-11-21 02:00:54.592461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.592499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:35:10.690 [2024-11-21 02:00:54.592504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:35:10.690 [2024-11-21 02:00:54.592510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:35:10.690 [2024-11-21 02:00:54.592721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:10.690 [2024-11-21 02:00:54.592727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:35:10.690 [2024-11-21 02:00:54.592732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.592788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:10.690 [2024-11-21 02:00:54.592793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:10.690 [2024-11-21 02:00:54.592799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.592820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:10.690 [2024-11-21 02:00:54.592825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:10.690 [2024-11-21 02:00:54.592832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.592845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:10.690 [2024-11-21 02:00:54.595604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.595635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:10.690 [2024-11-21 02:00:54.595642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:35:10.690 [2024-11-21 02:00:54.595648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.595669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.595675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:10.690 [2024-11-21 02:00:54.595681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:10.690 [2024-11-21 02:00:54.595686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.595719] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:10.690 [2024-11-21 02:00:54.595735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:10.690 [2024-11-21 02:00:54.595762] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:10.690 [2024-11-21 02:00:54.595773] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:10.690 [2024-11-21 02:00:54.595850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:10.690 [2024-11-21 02:00:54.595858] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:10.690 [2024-11-21 02:00:54.595865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:10.690 [2024-11-21 02:00:54.595873] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:10.690 [2024-11-21 02:00:54.595880] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:10.690 [2024-11-21 02:00:54.595885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:10.690 [2024-11-21 02:00:54.595893] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:10.690 [2024-11-21 02:00:54.595899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:10.690 [2024-11-21 02:00:54.595904] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:10.690 [2024-11-21 02:00:54.595910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.595916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:10.690 [2024-11-21 02:00:54.595921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:35:10.690 [2024-11-21 02:00:54.595927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.595989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.690 [2024-11-21 02:00:54.595996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:10.690 [2024-11-21 02:00:54.596001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:35:10.690 [2024-11-21 02:00:54.596008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.690 [2024-11-21 02:00:54.596083] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:10.690 [2024-11-21 02:00:54.596091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:10.690 [2024-11-21 02:00:54.596097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:10.690 [2024-11-21 02:00:54.596103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.690 [2024-11-21 02:00:54.596109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:10.690 [2024-11-21 02:00:54.596114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:10.690 [2024-11-21 02:00:54.596119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:10.690 [2024-11-21 02:00:54.596125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:10.690 [2024-11-21 02:00:54.596131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:10.690 [2024-11-21 02:00:54.596136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:10.690 [2024-11-21 02:00:54.596142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:10.690 [2024-11-21 02:00:54.596147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:10.691 [2024-11-21 02:00:54.596152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:10.691 [2024-11-21 02:00:54.596157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:10.691 [2024-11-21 02:00:54.596163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:10.691 [2024-11-21 02:00:54.596168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:10.691 [2024-11-21 02:00:54.596181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596187] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:10.691 [2024-11-21 02:00:54.596196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:10.691 [2024-11-21 02:00:54.596211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:10.691 [2024-11-21 02:00:54.596226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:10.691 [2024-11-21 02:00:54.596240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:10.691 [2024-11-21 02:00:54.596255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:10.691 [2024-11-21 02:00:54.596264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:10.691 [2024-11-21 02:00:54.596269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:10.691 [2024-11-21 02:00:54.596274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:10.691 [2024-11-21 02:00:54.596279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:10.691 [2024-11-21 02:00:54.596283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:10.691 [2024-11-21 02:00:54.596288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:10.691 [2024-11-21 02:00:54.596297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:10.691 [2024-11-21 02:00:54.596303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596307] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:10.691 [2024-11-21 02:00:54.596314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:10.691 [2024-11-21 02:00:54.596320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:10.691 [2024-11-21 02:00:54.596331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:10.691 [2024-11-21 02:00:54.596336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:10.691 [2024-11-21 02:00:54.596341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:10.691 
[2024-11-21 02:00:54.596346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:10.691 [2024-11-21 02:00:54.596351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:10.691 [2024-11-21 02:00:54.596356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:10.691 [2024-11-21 02:00:54.596361] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:10.691 [2024-11-21 02:00:54.596369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:10.691 [2024-11-21 02:00:54.596381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:10.691 [2024-11-21 02:00:54.596387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:10.691 [2024-11-21 02:00:54.596392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:10.691 [2024-11-21 02:00:54.596397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:10.691 [2024-11-21 02:00:54.596402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:10.691 [2024-11-21 02:00:54.596407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:10.691 [2024-11-21 02:00:54.596412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:10.691 [2024-11-21 02:00:54.596418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:10.691 [2024-11-21 02:00:54.596423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:10.691 [2024-11-21 02:00:54.596449] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:10.691 [2024-11-21 02:00:54.596455] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:35:10.691 [2024-11-21 02:00:54.596467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:10.691 [2024-11-21 02:00:54.596473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:10.691 [2024-11-21 02:00:54.596478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:10.691 [2024-11-21 02:00:54.596484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.691 [2024-11-21 02:00:54.596489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:10.691 [2024-11-21 02:00:54.596495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:35:10.691 [2024-11-21 02:00:54.596501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.691 [2024-11-21 02:00:54.614757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.691 [2024-11-21 02:00:54.614781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:10.691 [2024-11-21 02:00:54.614788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.227 ms 00:35:10.691 [2024-11-21 02:00:54.614794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.691 [2024-11-21 02:00:54.614852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.691 [2024-11-21 02:00:54.614858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:10.691 [2024-11-21 02:00:54.614864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:35:10.691 [2024-11-21 02:00:54.614872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.659298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.659327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:10.953 [2024-11-21 02:00:54.659336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.390 ms 00:35:10.953 [2024-11-21 02:00:54.659343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.659368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.659375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:10.953 [2024-11-21 02:00:54.659382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:10.953 [2024-11-21 02:00:54.659387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.659462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.659471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:10.953 [2024-11-21 02:00:54.659477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:10.953 [2024-11-21 02:00:54.659483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.659569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.659577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:10.953 [2024-11-21 02:00:54.659583] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:35:10.953 [2024-11-21 02:00:54.659588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.669935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.670051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:10.953 [2024-11-21 02:00:54.670064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.333 ms 00:35:10.953 [2024-11-21 02:00:54.670070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.670154] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:10.953 [2024-11-21 02:00:54.670164] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:10.953 [2024-11-21 02:00:54.670172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.670178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:10.953 [2024-11-21 02:00:54.670186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:35:10.953 [2024-11-21 02:00:54.670192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.679314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.679336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:10.953 [2024-11-21 02:00:54.679343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.111 ms 00:35:10.953 [2024-11-21 02:00:54.679349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.679434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.679441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:10.953 [2024-11-21 02:00:54.679447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:10.953 [2024-11-21 02:00:54.679453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.679478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.679485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:10.953 [2024-11-21 02:00:54.679492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:35:10.953 [2024-11-21 02:00:54.679498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.679943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.679953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:10.953 [2024-11-21 02:00:54.679959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:35:10.953 [2024-11-21 02:00:54.679965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.679977] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:10.953 [2024-11-21 02:00:54.679986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.679992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:35:10.953 [2024-11-21 02:00:54.679998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:10.953 [2024-11-21 02:00:54.680003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.688845] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:10.953 [2024-11-21 02:00:54.689023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.689034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:10.953 [2024-11-21 02:00:54.689041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.006 ms 00:35:10.953 [2024-11-21 02:00:54.689047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.690681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.690701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:10.953 [2024-11-21 02:00:54.690710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms 00:35:10.953 [2024-11-21 02:00:54.690716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.690772] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:10.953 [2024-11-21 02:00:54.691116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.691122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:10.953 [2024-11-21 02:00:54.691129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:35:10.953 [2024-11-21 02:00:54.691134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.691150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.691160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:10.953 [2024-11-21 02:00:54.691165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:35:10.953 [2024-11-21 02:00:54.691171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.953 [2024-11-21 02:00:54.691192] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:10.953 [2024-11-21 02:00:54.691199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.953 [2024-11-21 02:00:54.691205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:10.954 [2024-11-21 02:00:54.691211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:10.954 [2024-11-21 02:00:54.691216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.954 [2024-11-21 02:00:54.709476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.954 [2024-11-21 02:00:54.709502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:10.954 [2024-11-21 02:00:54.709510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:35:10.954 [2024-11-21 02:00:54.709516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.954 [2024-11-21 02:00:54.709568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:10.954 [2024-11-21 02:00:54.709576] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:10.954 [2024-11-21 02:00:54.709582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:35:10.954 [2024-11-21 02:00:54.709588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:10.954 [2024-11-21 02:00:54.710330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.598 ms, result 0 00:35:12.341  [2024-11-21T02:00:57.242Z] Copying: 20/1024 [MB] (20 MBps) [2024-11-21T02:00:58.187Z] Copying: 43/1024 [MB] (22 MBps) [2024-11-21T02:00:59.131Z] Copying: 60/1024 [MB] (17 MBps) [2024-11-21T02:01:00.075Z] Copying: 73/1024 [MB] (12 MBps) [2024-11-21T02:01:01.020Z] Copying: 90/1024 [MB] (17 MBps) [2024-11-21T02:01:02.009Z] Copying: 102/1024 [MB] (11 MBps) [2024-11-21T02:01:02.985Z] Copying: 116/1024 [MB] (14 MBps) [2024-11-21T02:01:03.930Z] Copying: 141/1024 [MB] (24 MBps) [2024-11-21T02:01:05.318Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-21T02:01:06.263Z] Copying: 168/1024 [MB] (15 MBps) [2024-11-21T02:01:07.202Z] Copying: 188/1024 [MB] (20 MBps) [2024-11-21T02:01:08.144Z] Copying: 204/1024 [MB] (16 MBps) [2024-11-21T02:01:09.088Z] Copying: 219/1024 [MB] (15 MBps) [2024-11-21T02:01:10.027Z] Copying: 233/1024 [MB] (14 MBps) [2024-11-21T02:01:10.967Z] Copying: 248/1024 [MB] (14 MBps) [2024-11-21T02:01:11.911Z] Copying: 269/1024 [MB] (21 MBps) [2024-11-21T02:01:13.289Z] Copying: 283/1024 [MB] (14 MBps) [2024-11-21T02:01:14.232Z] Copying: 295/1024 [MB] (11 MBps) [2024-11-21T02:01:15.175Z] Copying: 313/1024 [MB] (18 MBps) [2024-11-21T02:01:16.116Z] Copying: 329/1024 [MB] (15 MBps) [2024-11-21T02:01:17.058Z] Copying: 342/1024 [MB] (12 MBps) [2024-11-21T02:01:17.999Z] Copying: 358/1024 [MB] (16 MBps) [2024-11-21T02:01:18.945Z] Copying: 372/1024 [MB] (13 MBps) [2024-11-21T02:01:20.331Z] Copying: 383/1024 [MB] (11 MBps) [2024-11-21T02:01:21.270Z] Copying: 400/1024 [MB] (16 MBps) [2024-11-21T02:01:22.213Z] Copying: 416/1024 [MB] (16 MBps) [2024-11-21T02:01:23.158Z] Copying: 435/1024 [MB] (18 MBps) [2024-11-21T02:01:24.102Z] Copying: 450/1024 [MB] (15 MBps) [2024-11-21T02:01:25.038Z] Copying: 464/1024 [MB] (13 MBps) [2024-11-21T02:01:25.981Z] Copying: 477/1024 [MB] (12 MBps) [2024-11-21T02:01:26.923Z] Copying: 493/1024 [MB] (15 MBps) [2024-11-21T02:01:28.311Z] Copying: 508/1024 [MB] (15 MBps) [2024-11-21T02:01:29.254Z] Copying: 520/1024 [MB] (12 MBps) [2024-11-21T02:01:30.197Z] Copying: 531/1024 [MB] (11 MBps) [2024-11-21T02:01:31.141Z] Copying: 546/1024 [MB] (14 MBps) [2024-11-21T02:01:32.089Z] Copying: 558/1024 [MB] (12 MBps) [2024-11-21T02:01:33.115Z] Copying: 572/1024 [MB] (13 MBps) [2024-11-21T02:01:34.059Z] Copying: 591/1024 [MB] (18 MBps) [2024-11-21T02:01:35.001Z] Copying: 604/1024 [MB] (13 MBps) [2024-11-21T02:01:35.944Z] Copying: 618/1024 [MB] (13 MBps) [2024-11-21T02:01:37.329Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-21T02:01:38.272Z] Copying: 644/1024 [MB] (13 MBps) [2024-11-21T02:01:39.217Z] Copying: 656/1024 [MB] (12 MBps) [2024-11-21T02:01:40.160Z] Copying: 682/1024 [MB] (25 MBps) [2024-11-21T02:01:41.104Z] Copying: 707/1024 [MB] (25 MBps) [2024-11-21T02:01:42.048Z] Copying: 728/1024 [MB] (20 MBps) [2024-11-21T02:01:42.992Z] Copying: 749/1024 [MB] (21 MBps) [2024-11-21T02:01:43.939Z] Copying: 773/1024 [MB] (23 MBps) [2024-11-21T02:01:45.326Z] Copying: 792/1024 [MB] (19 MBps) [2024-11-21T02:01:46.272Z] Copying: 814/1024 [MB] (22 MBps) [2024-11-21T02:01:47.216Z] Copying: 829/1024 [MB] (14 MBps) 
[2024-11-21T02:01:48.158Z] Copying: 849/1024 [MB] (20 MBps) [2024-11-21T02:01:49.103Z] Copying: 870/1024 [MB] (20 MBps) [2024-11-21T02:01:50.065Z] Copying: 886/1024 [MB] (16 MBps) [2024-11-21T02:01:51.007Z] Copying: 912/1024 [MB] (25 MBps) [2024-11-21T02:01:51.951Z] Copying: 930/1024 [MB] (18 MBps) [2024-11-21T02:01:53.336Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-21T02:01:54.280Z] Copying: 956/1024 [MB] (15 MBps) [2024-11-21T02:01:55.222Z] Copying: 971/1024 [MB] (14 MBps) [2024-11-21T02:01:56.165Z] Copying: 988/1024 [MB] (16 MBps) [2024-11-21T02:01:57.108Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-21T02:01:58.050Z] Copying: 1009/1024 [MB] (10 MBps) [2024-11-21T02:01:58.311Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-21T02:01:58.573Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-21 02:01:58.537331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.616 [2024-11-21 02:01:58.537430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:14.616 [2024-11-21 02:01:58.537449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:14.616 [2024-11-21 02:01:58.537459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.616 [2024-11-21 02:01:58.537485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:14.616 [2024-11-21 02:01:58.542730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.616 [2024-11-21 02:01:58.542787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:14.616 [2024-11-21 02:01:58.542803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.225 ms 00:36:14.616 [2024-11-21 02:01:58.542815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.616 [2024-11-21 02:01:58.543154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.616 [2024-11-21 02:01:58.543177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:14.616 [2024-11-21 02:01:58.543191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:36:14.616 [2024-11-21 02:01:58.543202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.616 [2024-11-21 02:01:58.543240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.616 [2024-11-21 02:01:58.543253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:14.616 [2024-11-21 02:01:58.543265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:36:14.616 [2024-11-21 02:01:58.543276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.616 [2024-11-21 02:01:58.543349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.616 [2024-11-21 02:01:58.543361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:14.616 [2024-11-21 02:01:58.543377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:36:14.616 [2024-11-21 02:01:58.543394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.616 [2024-11-21 02:01:58.543415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:14.616 [2024-11-21 02:01:58.543432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:14.616 [2024-11-21 02:01:58.543446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:14.616 [2024-11-21 02:01:58.543562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543761] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.543985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 
02:01:58.544052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:36:14.617 [2024-11-21 02:01:58.544329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:14.617 [2024-11-21 02:01:58.544598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:14.618 
[2024-11-21 02:01:58.544623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 156fbf37-752a-40f3-a42b-1d6f226a350c 00:36:14.618 [2024-11-21 02:01:58.544636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:14.618 [2024-11-21 02:01:58.544647] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:36:14.618 [2024-11-21 02:01:58.544658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:36:14.618 [2024-11-21 02:01:58.544669] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:36:14.618 [2024-11-21 02:01:58.544680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:14.618 [2024-11-21 02:01:58.544695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:14.618 [2024-11-21 02:01:58.544706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:14.618 [2024-11-21 02:01:58.544715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:14.618 [2024-11-21 02:01:58.544725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:14.618 [2024-11-21 02:01:58.544736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.618 [2024-11-21 02:01:58.544747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:14.618 [2024-11-21 02:01:58.544759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:36:14.618 [2024-11-21 02:01:58.544770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.618 [2024-11-21 02:01:58.560191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.618 [2024-11-21 02:01:58.560242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:14.618 [2024-11-21 02:01:58.560255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.399 ms 00:36:14.618 [2024-11-21 02:01:58.560270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.618 [2024-11-21 02:01:58.560707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:14.618 [2024-11-21 02:01:58.560724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:14.618 [2024-11-21 02:01:58.560735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:36:14.618 [2024-11-21 02:01:58.560744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.597799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.597853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:14.879 [2024-11-21 02:01:58.597865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.597875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.597948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.597958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:14.879 [2024-11-21 02:01:58.597969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.597978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.598042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 
02:01:58.598054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:14.879 [2024-11-21 02:01:58.598067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.598077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.598095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.598103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:14.879 [2024-11-21 02:01:58.598112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.598120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.685035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.685274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:14.879 [2024-11-21 02:01:58.685297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.685306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.755562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.755660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:14.879 [2024-11-21 02:01:58.755675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.755684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.755773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.755784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:14.879 [2024-11-21 02:01:58.755794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.755806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.755866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.755876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:14.879 [2024-11-21 02:01:58.755885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.755893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.755972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.755982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:14.879 [2024-11-21 02:01:58.755991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.755998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.756029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.756039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:14.879 [2024-11-21 02:01:58.756047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.756056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.756096] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.756106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:14.879 [2024-11-21 02:01:58.756115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.879 [2024-11-21 02:01:58.756122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.879 [2024-11-21 02:01:58.756171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:14.879 [2024-11-21 02:01:58.756183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:14.880 [2024-11-21 02:01:58.756192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:14.880 [2024-11-21 02:01:58.756200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:14.880 [2024-11-21 02:01:58.756341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 218.974 ms, result 0 00:36:15.821 00:36:15.821 00:36:15.821 02:01:59 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:18.369 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:18.369 Process with pid 83895 is not found 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83895 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83895 ']' 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83895 00:36:18.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83895) - No such process 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 83895 is not found' 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:18.369 Remove shared memory files 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_band_md /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_l2p_l1 /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_l2p_l2 /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_l2p_l2_ctx /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_nvc_md /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_p2l_pool /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_sb /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_sb_shm /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_trim_bitmap /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_trim_log /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_trim_md /dev/hugepages/ftl_156fbf37-752a-40f3-a42b-1d6f226a350c_vmap 00:36:18.369 02:02:01 ftl.ftl_restore_fast 
-- ftl/common.sh@207 -- # rm -f rm -f 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:18.369 ************************************ 00:36:18.369 END TEST ftl_restore_fast 00:36:18.369 ************************************ 00:36:18.369 00:36:18.369 real 4m54.091s 00:36:18.369 user 4m41.897s 00:36:18.369 sys 0m11.923s 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:18.369 02:02:01 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:18.369 Process with pid 74890 is not found 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@14 -- # killprocess 74890 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@954 -- # '[' -z 74890 ']' 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@958 -- # kill -0 74890 00:36:18.369 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74890) - No such process 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74890 is not found' 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:18.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86879 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86879 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@835 -- # '[' -z 86879 ']' 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:18.369 02:02:01 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:18.369 02:02:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:18.369 [2024-11-21 02:02:01.940394] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 24.03.0 initialization... 
00:36:18.369 [2024-11-21 02:02:01.940676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86879 ] 00:36:18.369 [2024-11-21 02:02:02.098935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.369 [2024-11-21 02:02:02.255135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.310 02:02:02 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:19.310 02:02:02 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:19.311 02:02:02 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:19.311 nvme0n1 00:36:19.311 02:02:03 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:19.311 02:02:03 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:19.311 02:02:03 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:19.571 02:02:03 ftl -- ftl/common.sh@28 -- # stores=55f008ac-a7b4-4ae3-8989-c2495a5c7dca 00:36:19.571 02:02:03 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:19.571 02:02:03 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55f008ac-a7b4-4ae3-8989-c2495a5c7dca 00:36:19.832 02:02:03 ftl -- ftl/ftl.sh@23 -- # killprocess 86879 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 86879 ']' 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@958 -- # kill -0 86879 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@959 -- # uname 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86879 00:36:19.832 killing process with pid 86879 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86879' 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@973 -- # kill 86879 00:36:19.832 02:02:03 ftl -- common/autotest_common.sh@978 -- # wait 86879 00:36:21.310 02:02:05 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:21.310 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:21.310 Waiting for block devices as requested 00:36:21.571 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:21.571 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:21.571 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:21.831 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:27.118 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:27.118 Remove shared memory files 00:36:27.118 02:02:10 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:27.118 02:02:10 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:27.119 02:02:10 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:27.119 02:02:10 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:27.119 02:02:10 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:27.119 02:02:10 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:27.119 02:02:10 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:27.119 00:36:27.119 real 
18m19.959s 00:36:27.119 user 20m17.916s 00:36:27.119 sys 1m34.261s 00:36:27.119 02:02:10 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.119 ************************************ 00:36:27.119 02:02:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:27.119 END TEST ftl 00:36:27.119 ************************************ 00:36:27.119 02:02:10 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:27.119 02:02:10 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:27.119 02:02:10 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:27.119 02:02:10 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:27.119 02:02:10 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:27.119 02:02:10 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:27.119 02:02:10 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:27.119 02:02:10 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:27.119 02:02:10 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:27.119 02:02:10 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:27.119 02:02:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:27.119 02:02:10 -- common/autotest_common.sh@10 -- # set +x 00:36:27.119 02:02:10 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:27.119 02:02:10 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:27.119 02:02:10 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:27.119 02:02:10 -- common/autotest_common.sh@10 -- # set +x 00:36:28.503 INFO: APP EXITING 00:36:28.503 INFO: killing all VMs 00:36:28.503 INFO: killing vhost app 00:36:28.503 INFO: EXIT DONE 00:36:28.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:29.074 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:29.074 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:29.074 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:29.074 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:29.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:29.905 Cleaning 00:36:29.905 Removing: /var/run/dpdk/spdk0/config 00:36:29.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:29.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:29.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:29.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:29.905 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:29.905 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:29.905 Removing: /var/run/dpdk/spdk0 00:36:29.905 Removing: /var/run/dpdk/spdk_pid56898 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57100 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57307 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57406 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57445 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57562 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57580 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57769 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57867 00:36:29.905 Removing: /var/run/dpdk/spdk_pid57957 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58063 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58154 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58194 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58229 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58301 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58379 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58810 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58863 00:36:29.905 
Removing: /var/run/dpdk/spdk_pid58915 00:36:29.905 Removing: /var/run/dpdk/spdk_pid58931 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59033 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59038 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59135 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59145 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59204 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59216 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59269 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59287 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59442 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59478 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59562 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59734 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59817 00:36:29.905 Removing: /var/run/dpdk/spdk_pid59854 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60293 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60385 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60500 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60547 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60573 00:36:29.905 Removing: /var/run/dpdk/spdk_pid60659 00:36:29.905 Removing: /var/run/dpdk/spdk_pid61276 00:36:29.905 Removing: /var/run/dpdk/spdk_pid61319 00:36:29.905 Removing: /var/run/dpdk/spdk_pid61779 00:36:29.905 Removing: /var/run/dpdk/spdk_pid61877 00:36:29.905 Removing: /var/run/dpdk/spdk_pid61988 00:36:29.905 Removing: /var/run/dpdk/spdk_pid62041 00:36:29.905 Removing: /var/run/dpdk/spdk_pid62063 00:36:29.905 Removing: /var/run/dpdk/spdk_pid62092 00:36:29.905 Removing: /var/run/dpdk/spdk_pid63928 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64060 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64069 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64081 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64121 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64125 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64137 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64182 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64186 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64198 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64243 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64247 00:36:29.905 Removing: /var/run/dpdk/spdk_pid64259 00:36:29.905 Removing: /var/run/dpdk/spdk_pid65651 00:36:29.905 Removing: /var/run/dpdk/spdk_pid65749 00:36:29.905 Removing: /var/run/dpdk/spdk_pid67149 00:36:29.905 Removing: /var/run/dpdk/spdk_pid68900 00:36:29.905 Removing: /var/run/dpdk/spdk_pid68968 00:36:29.905 Removing: /var/run/dpdk/spdk_pid69049 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69153 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69245 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69341 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69409 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69491 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69596 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69688 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69789 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69862 00:36:30.166 Removing: /var/run/dpdk/spdk_pid69933 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70037 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70133 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70230 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70299 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70375 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70479 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70571 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70666 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70735 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70809 00:36:30.166 Removing: 
/var/run/dpdk/spdk_pid70889 00:36:30.166 Removing: /var/run/dpdk/spdk_pid70958 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71064 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71151 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71250 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71322 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71396 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71477 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71552 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71650 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71746 00:36:30.166 Removing: /var/run/dpdk/spdk_pid71890 00:36:30.166 Removing: /var/run/dpdk/spdk_pid72163 00:36:30.166 Removing: /var/run/dpdk/spdk_pid72205 00:36:30.166 Removing: /var/run/dpdk/spdk_pid72657 00:36:30.166 Removing: /var/run/dpdk/spdk_pid72842 00:36:30.166 Removing: /var/run/dpdk/spdk_pid72941 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73051 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73093 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73124 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73414 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73473 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73548 00:36:30.166 Removing: /var/run/dpdk/spdk_pid73943 00:36:30.166 Removing: /var/run/dpdk/spdk_pid74091 00:36:30.166 Removing: /var/run/dpdk/spdk_pid74890 00:36:30.166 Removing: /var/run/dpdk/spdk_pid75029 00:36:30.166 Removing: /var/run/dpdk/spdk_pid75187 00:36:30.166 Removing: /var/run/dpdk/spdk_pid75284 00:36:30.166 Removing: /var/run/dpdk/spdk_pid75598 00:36:30.166 Removing: /var/run/dpdk/spdk_pid75840 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76182 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76364 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76566 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76624 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76866 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76898 00:36:30.166 Removing: /var/run/dpdk/spdk_pid76951 00:36:30.166 Removing: /var/run/dpdk/spdk_pid77200 00:36:30.166 Removing: /var/run/dpdk/spdk_pid77436 00:36:30.166 Removing: /var/run/dpdk/spdk_pid78073 00:36:30.166 Removing: /var/run/dpdk/spdk_pid78814 00:36:30.166 Removing: /var/run/dpdk/spdk_pid79391 00:36:30.166 Removing: /var/run/dpdk/spdk_pid80255 00:36:30.166 Removing: /var/run/dpdk/spdk_pid80402 00:36:30.166 Removing: /var/run/dpdk/spdk_pid80485 00:36:30.166 Removing: /var/run/dpdk/spdk_pid80987 00:36:30.166 Removing: /var/run/dpdk/spdk_pid81045 00:36:30.166 Removing: /var/run/dpdk/spdk_pid81559 00:36:30.166 Removing: /var/run/dpdk/spdk_pid82084 00:36:30.166 Removing: /var/run/dpdk/spdk_pid82880 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83007 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83055 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83119 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83175 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83228 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83404 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83484 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83552 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83619 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83648 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83705 00:36:30.166 Removing: /var/run/dpdk/spdk_pid83895 00:36:30.166 Removing: /var/run/dpdk/spdk_pid84123 00:36:30.167 Removing: /var/run/dpdk/spdk_pid84945 00:36:30.167 Removing: /var/run/dpdk/spdk_pid85611 00:36:30.167 Removing: /var/run/dpdk/spdk_pid86191 00:36:30.167 Removing: /var/run/dpdk/spdk_pid86879 00:36:30.167 Clean 00:36:30.428 02:02:14 -- common/autotest_common.sh@1453 -- # return 0 00:36:30.428 
02:02:14 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:30.428 02:02:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:30.428 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:36:30.428 02:02:14 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:30.428 02:02:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:30.428 02:02:14 -- common/autotest_common.sh@10 -- # set +x 00:36:30.428 02:02:14 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:30.428 02:02:14 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:30.428 02:02:14 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:30.428 02:02:14 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:30.428 02:02:14 -- spdk/autotest.sh@398 -- # hostname 00:36:30.428 02:02:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:30.689 geninfo: WARNING: invalid characters removed from testname! 00:36:57.273 02:02:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:59.823 02:02:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:02.370 02:02:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:05.672 02:02:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:08.221 02:02:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:10.767 02:02:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:13.315 02:02:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:13.315 02:02:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:13.315 02:02:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:13.315 02:02:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:13.315 02:02:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:13.315 02:02:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:13.315 + [[ -n 5041 ]] 00:37:13.315 + sudo kill 5041 00:37:13.326 [Pipeline] } 00:37:13.344 [Pipeline] // timeout 00:37:13.350 [Pipeline] } 00:37:13.368 [Pipeline] // stage 00:37:13.374 [Pipeline] } 00:37:13.393 [Pipeline] // catchError 00:37:13.403 [Pipeline] stage 00:37:13.406 [Pipeline] { (Stop VM) 00:37:13.421 [Pipeline] sh 00:37:13.705 + vagrant halt 00:37:16.245 ==> default: Halting domain... 00:37:21.600 [Pipeline] sh 00:37:21.883 + vagrant destroy -f 00:37:24.429 ==> default: Removing domain... 00:37:25.014 [Pipeline] sh 00:37:25.299 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:25.310 [Pipeline] } 00:37:25.325 [Pipeline] // stage 00:37:25.330 [Pipeline] } 00:37:25.345 [Pipeline] // dir 00:37:25.355 [Pipeline] } 00:37:25.384 [Pipeline] // wrap 00:37:25.392 [Pipeline] } 00:37:25.419 [Pipeline] // catchError 00:37:25.426 [Pipeline] stage 00:37:25.428 [Pipeline] { (Epilogue) 00:37:25.437 [Pipeline] sh 00:37:25.719 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:32.319 [Pipeline] catchError 00:37:32.321 [Pipeline] { 00:37:32.335 [Pipeline] sh 00:37:32.620 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:32.620 Artifacts sizes are good 00:37:32.631 [Pipeline] } 00:37:32.645 [Pipeline] // catchError 00:37:32.657 [Pipeline] archiveArtifacts 00:37:32.666 Archiving artifacts 00:37:32.766 [Pipeline] cleanWs 00:37:32.779 [WS-CLEANUP] Deleting project workspace... 00:37:32.779 [WS-CLEANUP] Deferred wipeout is used... 00:37:32.786 [WS-CLEANUP] done 00:37:32.788 [Pipeline] } 00:37:32.804 [Pipeline] // stage 00:37:32.810 [Pipeline] } 00:37:32.824 [Pipeline] // node 00:37:32.830 [Pipeline] End of Pipeline 00:37:32.882 Finished: SUCCESS