00:00:00.001 Started by upstream project "autotest-per-patch" build number 131817 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.072 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.109 Fetching changes from the remote Git repository 00:00:00.116 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.167 Using shallow fetch with depth 1 00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.167 > git --version # timeout=10 00:00:00.206 > git --version # 'git version 2.39.2' 00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.233 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.233 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.399 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.411 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.423 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD) 00:00:05.423 > git config core.sparsecheckout # timeout=10 00:00:05.432 > git read-tree -mu HEAD # timeout=10 00:00:05.450 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5 00:00:05.475 Commit message: "packer: Fix typo in a package name" 00:00:05.476 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10 00:00:05.591 [Pipeline] Start of Pipeline 00:00:05.603 [Pipeline] library 00:00:05.605 Loading library shm_lib@master 00:00:05.605 Library shm_lib@master is cached. Copying from home. 00:00:05.623 [Pipeline] node 00:00:05.630 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.631 [Pipeline] { 00:00:05.641 [Pipeline] catchError 00:00:05.643 [Pipeline] { 00:00:05.655 [Pipeline] wrap 00:00:05.664 [Pipeline] { 00:00:05.669 [Pipeline] stage 00:00:05.670 [Pipeline] { (Prologue) 00:00:05.685 [Pipeline] echo 00:00:05.686 Node: VM-host-WFP1 00:00:05.691 [Pipeline] cleanWs 00:00:05.700 [WS-CLEANUP] Deleting project workspace... 00:00:05.700 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.704 [WS-CLEANUP] done 00:00:05.955 [Pipeline] setCustomBuildProperty 00:00:06.024 [Pipeline] httpRequest 00:00:06.625 [Pipeline] echo 00:00:06.626 Sorcerer 10.211.164.101 is alive 00:00:06.636 [Pipeline] retry 00:00:06.639 [Pipeline] { 00:00:06.654 [Pipeline] httpRequest 00:00:06.658 HttpMethod: GET 00:00:06.659 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:06.659 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:06.678 Response Code: HTTP/1.1 200 OK 00:00:06.679 Success: Status code 200 is in the accepted range: 200,404 00:00:06.679 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:09.891 [Pipeline] } 00:00:09.910 [Pipeline] // retry 00:00:09.918 [Pipeline] sh 00:00:10.200 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz 00:00:10.216 [Pipeline] httpRequest 00:00:10.578 [Pipeline] echo 00:00:10.580 Sorcerer 10.211.164.101 is alive 00:00:10.591 [Pipeline] retry 00:00:10.593 [Pipeline] { 00:00:10.609 [Pipeline] httpRequest 00:00:10.648 HttpMethod: GET 00:00:10.649 URL: http://10.211.164.101/packages/spdk_183001ebcdbbbbf8a778999ba30fdf72d5b4fe4e.tar.gz 00:00:10.650 Sending request to url: http://10.211.164.101/packages/spdk_183001ebcdbbbbf8a778999ba30fdf72d5b4fe4e.tar.gz 00:00:10.654 Response Code: HTTP/1.1 200 OK 00:00:10.655 Success: Status code 200 is in the accepted range: 200,404 00:00:10.655 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_183001ebcdbbbbf8a778999ba30fdf72d5b4fe4e.tar.gz 00:00:34.490 [Pipeline] } 00:00:34.508 [Pipeline] // retry 00:00:34.515 [Pipeline] sh 00:00:34.800 + tar --no-same-owner -xf spdk_183001ebcdbbbbf8a778999ba30fdf72d5b4fe4e.tar.gz 00:00:37.348 [Pipeline] sh 00:00:37.631 + git -C spdk log --oneline -n5 00:00:37.631 183001ebc bdev/nvme: Fix race between IO channel creation and reconnection 00:00:37.631 cab1decc1 thread: add NUMA node support to spdk_iobuf_put() 00:00:37.631 40c9acf6d env: add spdk_mem_get_numa_id 00:00:37.631 0f99ab2fa thread: allocate iobuf memory based on numa_id 00:00:37.631 2ef611c19 thread: update all iobuf non-get/put functions for multiple NUMA nodes 00:00:37.651 [Pipeline] writeFile 00:00:37.667 [Pipeline] sh 00:00:37.957 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:37.970 [Pipeline] sh 00:00:38.253 + cat autorun-spdk.conf 00:00:38.253 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.253 SPDK_TEST_NVME=1 00:00:38.253 SPDK_TEST_FTL=1 00:00:38.253 SPDK_TEST_ISAL=1 00:00:38.253 SPDK_RUN_ASAN=1 00:00:38.253 SPDK_RUN_UBSAN=1 00:00:38.253 SPDK_TEST_XNVME=1 00:00:38.253 SPDK_TEST_NVME_FDP=1 00:00:38.253 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.260 RUN_NIGHTLY=0 00:00:38.262 [Pipeline] } 00:00:38.274 [Pipeline] // stage 00:00:38.287 [Pipeline] stage 00:00:38.288 [Pipeline] { (Run VM) 00:00:38.298 [Pipeline] sh 00:00:38.579 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:38.579 + echo 'Start stage prepare_nvme.sh' 00:00:38.579 Start stage prepare_nvme.sh 00:00:38.579 + [[ -n 1 ]] 00:00:38.579 + disk_prefix=ex1 00:00:38.579 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:38.579 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:38.579 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:38.579 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.579 ++ SPDK_TEST_NVME=1 00:00:38.579 ++ SPDK_TEST_FTL=1 00:00:38.579 ++ SPDK_TEST_ISAL=1 
00:00:38.579 ++ SPDK_RUN_ASAN=1 00:00:38.579 ++ SPDK_RUN_UBSAN=1 00:00:38.579 ++ SPDK_TEST_XNVME=1 00:00:38.579 ++ SPDK_TEST_NVME_FDP=1 00:00:38.579 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.579 ++ RUN_NIGHTLY=0 00:00:38.579 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:38.579 + nvme_files=() 00:00:38.579 + declare -A nvme_files 00:00:38.579 + backend_dir=/var/lib/libvirt/images/backends 00:00:38.579 + nvme_files['nvme.img']=5G 00:00:38.579 + nvme_files['nvme-cmb.img']=5G 00:00:38.579 + nvme_files['nvme-multi0.img']=4G 00:00:38.579 + nvme_files['nvme-multi1.img']=4G 00:00:38.579 + nvme_files['nvme-multi2.img']=4G 00:00:38.579 + nvme_files['nvme-openstack.img']=8G 00:00:38.579 + nvme_files['nvme-zns.img']=5G 00:00:38.579 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:38.579 + (( SPDK_TEST_FTL == 1 )) 00:00:38.579 + nvme_files["nvme-ftl.img"]=6G 00:00:38.579 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:38.579 + nvme_files["nvme-fdp.img"]=1G 00:00:38.579 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:38.579 + for nvme in "${!nvme_files[@]}" 00:00:38.579 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:38.579 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.579 + for nvme in "${!nvme_files[@]}" 00:00:38.579 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:00:38.839 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:38.839 + for nvme in "${!nvme_files[@]}" 00:00:38.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:38.839 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.839 + for nvme in "${!nvme_files[@]}" 00:00:38.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:38.839 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:38.839 + for nvme in "${!nvme_files[@]}" 00:00:38.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:38.839 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.839 + for nvme in "${!nvme_files[@]}" 00:00:38.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:39.097 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.097 + for nvme in "${!nvme_files[@]}" 00:00:39.097 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:39.357 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.357 + for nvme in "${!nvme_files[@]}" 00:00:39.357 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:00:39.357 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:39.357 + for nvme in "${!nvme_files[@]}" 00:00:39.357 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:39.616 
Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.616 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:39.616 + echo 'End stage prepare_nvme.sh' 00:00:39.616 End stage prepare_nvme.sh 00:00:39.630 [Pipeline] sh 00:00:39.969 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:39.969 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:39.969 00:00:39.969 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:39.969 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:39.969 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:39.969 HELP=0 00:00:39.969 DRY_RUN=0 00:00:39.969 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:00:39.969 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:39.969 NVME_AUTO_CREATE=0 00:00:39.969 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:00:39.969 NVME_CMB=,,,, 00:00:39.969 NVME_PMR=,,,, 00:00:39.969 NVME_ZNS=,,,, 00:00:39.969 NVME_MS=true,,,, 00:00:39.969 NVME_FDP=,,,on, 00:00:39.969 SPDK_VAGRANT_DISTRO=fedora39 00:00:39.969 SPDK_VAGRANT_VMCPU=10 00:00:39.969 SPDK_VAGRANT_VMRAM=12288 00:00:39.969 SPDK_VAGRANT_PROVIDER=libvirt 00:00:39.969 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:39.969 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:39.969 SPDK_OPENSTACK_NETWORK=0 00:00:39.969 VAGRANT_PACKAGE_BOX=0 00:00:39.969 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:39.969 FORCE_DISTRO=true 00:00:39.969 VAGRANT_BOX_VERSION= 00:00:39.969 EXTRA_VAGRANTFILES= 00:00:39.969 NIC_MODEL=e1000 00:00:39.969 00:00:39.969 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:39.969 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:43.260 Bringing machine 'default' up with 'libvirt' provider... 00:00:44.194 ==> default: Creating image (snapshot of base box volume). 00:00:44.452 ==> default: Creating domain with the following settings... 
00:00:44.452 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729868726_0e7a8fb68fad5fd72cac 00:00:44.452 ==> default: -- Domain type: kvm 00:00:44.452 ==> default: -- Cpus: 10 00:00:44.452 ==> default: -- Feature: acpi 00:00:44.452 ==> default: -- Feature: apic 00:00:44.452 ==> default: -- Feature: pae 00:00:44.452 ==> default: -- Memory: 12288M 00:00:44.452 ==> default: -- Memory Backing: hugepages: 00:00:44.452 ==> default: -- Management MAC: 00:00:44.452 ==> default: -- Loader: 00:00:44.452 ==> default: -- Nvram: 00:00:44.452 ==> default: -- Base box: spdk/fedora39 00:00:44.452 ==> default: -- Storage pool: default 00:00:44.452 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729868726_0e7a8fb68fad5fd72cac.img (20G) 00:00:44.452 ==> default: -- Volume Cache: default 00:00:44.452 ==> default: -- Kernel: 00:00:44.452 ==> default: -- Initrd: 00:00:44.452 ==> default: -- Graphics Type: vnc 00:00:44.452 ==> default: -- Graphics Port: -1 00:00:44.452 ==> default: -- Graphics IP: 127.0.0.1 00:00:44.452 ==> default: -- Graphics Password: Not defined 00:00:44.452 ==> default: -- Video Type: cirrus 00:00:44.452 ==> default: -- Video VRAM: 9216 00:00:44.452 ==> default: -- Sound Type: 00:00:44.452 ==> default: -- Keymap: en-us 00:00:44.452 ==> default: -- TPM Path: 00:00:44.452 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:44.452 ==> default: -- Command line args: 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:44.452 ==> default: -> value=-drive, 00:00:44.452 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:44.452 ==> default: -> value=-device, 00:00:44.452 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.711 ==> default: Creating shared folders metadata... 00:00:44.711 ==> default: Starting domain. 00:00:46.620 ==> default: Waiting for domain to get an IP address... 00:01:04.726 ==> default: Waiting for SSH to become available... 00:01:04.726 ==> default: Configuring and enabling network interfaces... 00:01:08.949 default: SSH address: 192.168.121.152:22 00:01:08.949 default: SSH username: vagrant 00:01:08.949 default: SSH auth method: private key 00:01:12.237 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.358 ==> default: Mounting SSHFS shared folder... 00:01:22.894 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:22.894 ==> default: Checking Mount.. 00:01:24.271 ==> default: Folder Successfully Mounted! 00:01:24.271 ==> default: Running provisioner: file... 00:01:25.648 default: ~/.gitconfig => .gitconfig 00:01:25.907 00:01:25.907 SUCCESS! 00:01:25.907 00:01:25.907 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:25.907 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:25.907 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:25.907 00:01:25.916 [Pipeline] } 00:01:25.930 [Pipeline] // stage 00:01:25.939 [Pipeline] dir 00:01:25.939 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:25.941 [Pipeline] { 00:01:25.953 [Pipeline] catchError 00:01:25.955 [Pipeline] { 00:01:25.967 [Pipeline] sh 00:01:26.249 + vagrant ssh-config --host vagrant 00:01:26.249 + sed -ne /^Host/,$p 00:01:26.249 + tee ssh_conf 00:01:29.539 Host vagrant 00:01:29.539 HostName 192.168.121.152 00:01:29.539 User vagrant 00:01:29.539 Port 22 00:01:29.539 UserKnownHostsFile /dev/null 00:01:29.539 StrictHostKeyChecking no 00:01:29.539 PasswordAuthentication no 00:01:29.539 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:29.539 IdentitiesOnly yes 00:01:29.539 LogLevel FATAL 00:01:29.539 ForwardAgent yes 00:01:29.539 ForwardX11 yes 00:01:29.539 00:01:29.552 [Pipeline] withEnv 00:01:29.554 [Pipeline] { 00:01:29.567 [Pipeline] sh 00:01:29.845 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:29.845 source /etc/os-release 00:01:29.845 [[ -e /image.version ]] && img=$(< /image.version) 00:01:29.845 # Minimal, systemd-like check. 
00:01:29.845 if [[ -e /.dockerenv ]]; then 00:01:29.845 # Clear garbage from the node's name: 00:01:29.845 # agt-er_autotest_547-896 -> autotest_547-896 00:01:29.845 # $HOSTNAME is the actual container id 00:01:29.845 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:29.845 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:29.845 # We can assume this is a mount from a host where container is running, 00:01:29.845 # so fetch its hostname to easily identify the target swarm worker. 00:01:29.845 container="$(< /etc/hostname) ($agent)" 00:01:29.845 else 00:01:29.845 # Fallback 00:01:29.845 container=$agent 00:01:29.845 fi 00:01:29.845 fi 00:01:29.845 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:29.845 00:01:30.116 [Pipeline] } 00:01:30.136 [Pipeline] // withEnv 00:01:30.146 [Pipeline] setCustomBuildProperty 00:01:30.164 [Pipeline] stage 00:01:30.167 [Pipeline] { (Tests) 00:01:30.188 [Pipeline] sh 00:01:30.472 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:30.742 [Pipeline] sh 00:01:31.021 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:31.293 [Pipeline] timeout 00:01:31.293 Timeout set to expire in 50 min 00:01:31.295 [Pipeline] { 00:01:31.309 [Pipeline] sh 00:01:31.585 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:32.150 HEAD is now at 183001ebc bdev/nvme: Fix race between IO channel creation and reconnection 00:01:32.161 [Pipeline] sh 00:01:32.438 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:32.769 [Pipeline] sh 00:01:33.048 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:33.319 [Pipeline] sh 00:01:33.595 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:33.852 ++ readlink -f spdk_repo 00:01:33.852 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:33.852 + [[ -n /home/vagrant/spdk_repo ]] 00:01:33.852 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:33.852 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:33.852 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:33.852 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:33.852 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:33.852 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:33.852 + cd /home/vagrant/spdk_repo 00:01:33.852 + source /etc/os-release 00:01:33.852 ++ NAME='Fedora Linux' 00:01:33.852 ++ VERSION='39 (Cloud Edition)' 00:01:33.852 ++ ID=fedora 00:01:33.852 ++ VERSION_ID=39 00:01:33.852 ++ VERSION_CODENAME= 00:01:33.852 ++ PLATFORM_ID=platform:f39 00:01:33.852 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:33.852 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:33.852 ++ LOGO=fedora-logo-icon 00:01:33.852 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:33.852 ++ HOME_URL=https://fedoraproject.org/ 00:01:33.852 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:33.852 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:33.852 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:33.852 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:33.852 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:33.852 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:33.852 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:33.852 ++ SUPPORT_END=2024-11-12 00:01:33.852 ++ VARIANT='Cloud Edition' 00:01:33.852 ++ VARIANT_ID=cloud 00:01:33.852 + uname -a 00:01:33.852 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:33.852 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:34.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:34.674 Hugepages 00:01:34.674 node hugesize free / total 00:01:34.674 node0 1048576kB 0 / 0 00:01:34.674 node0 2048kB 0 / 0 00:01:34.674 00:01:34.674 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:34.674 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:34.674 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:34.674 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:34.674 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:34.674 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:34.674 + rm -f /tmp/spdk-ld-path 00:01:34.931 + source autorun-spdk.conf 00:01:34.931 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.931 ++ SPDK_TEST_NVME=1 00:01:34.931 ++ SPDK_TEST_FTL=1 00:01:34.931 ++ SPDK_TEST_ISAL=1 00:01:34.931 ++ SPDK_RUN_ASAN=1 00:01:34.931 ++ SPDK_RUN_UBSAN=1 00:01:34.931 ++ SPDK_TEST_XNVME=1 00:01:34.931 ++ SPDK_TEST_NVME_FDP=1 00:01:34.931 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:34.931 ++ RUN_NIGHTLY=0 00:01:34.931 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:34.931 + [[ -n '' ]] 00:01:34.931 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:34.931 + for M in /var/spdk/build-*-manifest.txt 00:01:34.931 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:34.931 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.931 + for M in /var/spdk/build-*-manifest.txt 00:01:34.931 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:34.931 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.931 + for M in /var/spdk/build-*-manifest.txt 00:01:34.931 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:34.931 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:34.931 ++ uname 00:01:34.931 + [[ Linux == \L\i\n\u\x ]] 00:01:34.931 + sudo dmesg -T 00:01:34.931 + sudo dmesg --clear 00:01:34.931 + dmesg_pid=5249 00:01:34.931 
+ sudo dmesg -Tw 00:01:34.931 + [[ Fedora Linux == FreeBSD ]] 00:01:34.931 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.931 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:34.931 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:34.931 + [[ -x /usr/src/fio-static/fio ]] 00:01:34.931 + export FIO_BIN=/usr/src/fio-static/fio 00:01:34.931 + FIO_BIN=/usr/src/fio-static/fio 00:01:34.931 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:34.931 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:34.931 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:34.931 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.931 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:34.931 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:34.931 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.931 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:34.931 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:34.931 Test configuration: 00:01:34.931 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:34.931 SPDK_TEST_NVME=1 00:01:34.931 SPDK_TEST_FTL=1 00:01:34.931 SPDK_TEST_ISAL=1 00:01:34.931 SPDK_RUN_ASAN=1 00:01:34.931 SPDK_RUN_UBSAN=1 00:01:34.931 SPDK_TEST_XNVME=1 00:01:34.931 SPDK_TEST_NVME_FDP=1 00:01:34.931 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.189 RUN_NIGHTLY=0 15:06:17 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:01:35.189 15:06:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:35.189 15:06:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:35.189 15:06:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:35.189 15:06:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.189 15:06:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.189 15:06:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.189 15:06:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.189 15:06:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.189 15:06:17 -- paths/export.sh@5 -- $ export PATH 00:01:35.189 15:06:17 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.189 15:06:17 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:35.189 15:06:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:35.189 15:06:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729868777.XXXXXX 00:01:35.189 15:06:17 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729868777.xD80Ao 00:01:35.189 15:06:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:35.189 15:06:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:35.189 15:06:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:35.189 15:06:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:35.189 15:06:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:35.189 15:06:17 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:35.189 15:06:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:35.189 15:06:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:35.190 15:06:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:35.190 15:06:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:35.190 15:06:17 -- pm/common@17 -- $ local monitor 00:01:35.190 15:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.190 15:06:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.190 15:06:17 -- pm/common@21 -- $ date +%s 00:01:35.190 15:06:17 -- pm/common@25 -- $ sleep 1 00:01:35.190 15:06:17 -- pm/common@21 -- $ date +%s 00:01:35.190 15:06:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729868777 00:01:35.190 15:06:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729868777 00:01:35.190 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729868777_collect-cpu-load.pm.log 00:01:35.190 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729868777_collect-vmstat.pm.log 00:01:36.124 15:06:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:36.124 15:06:18 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:36.124 15:06:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:36.124 15:06:18 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:36.124 15:06:18 -- spdk/autobuild.sh@16 -- $ date -u 00:01:36.124 Fri Oct 25 03:06:18 PM UTC 2024 00:01:36.124 15:06:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:36.124 v25.01-pre-118-g183001ebc 
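Everything autobuild produces in this run is stamped with the same epoch timestamp (1729868777 here), which is how the collect-cpu-load and collect-vmstat samples under output/power can later be matched back to this exact build. A minimal bash sketch of that pattern follows; the helper paths and flags are exactly as logged above, while the variable wiring and the backgrounding are assumptions, not the literal autobuild_common.sh body:

    # One epoch stamp shared by the scratch workspace and the monitor logs.
    timestamp=$(date +%s)                                    # 1729868777 in this run
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${timestamp}.XXXXXX")  # -> /tmp/spdk_1729868777.xD80Ao
    out=/home/vagrant/spdk_repo/spdk/../output
    # Each monitor redirects to $out/power/monitor.autobuild.sh.<stamp>_<tool>.pm.log
    # (the "Redirecting to ..." lines above), so samples line up with this build.
    scripts/perf/pm/collect-cpu-load -d "$out/power" -l -p "monitor.autobuild.sh.${timestamp}" &
    scripts/perf/pm/collect-vmstat   -d "$out/power" -l -p "monitor.autobuild.sh.${timestamp}" &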
00:01:36.124 15:06:18 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:36.124 15:06:18 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:36.124 15:06:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:36.124 15:06:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.124 15:06:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.124 ************************************ 00:01:36.124 START TEST asan 00:01:36.124 ************************************ 00:01:36.124 using asan 00:01:36.124 15:06:18 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:36.124 00:01:36.124 real 0m0.000s 00:01:36.124 user 0m0.000s 00:01:36.124 sys 0m0.000s 00:01:36.124 15:06:18 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:36.124 15:06:18 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:36.124 ************************************ 00:01:36.124 END TEST asan 00:01:36.124 ************************************ 00:01:36.124 15:06:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:36.124 15:06:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:36.124 15:06:18 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:36.124 15:06:18 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:36.124 15:06:18 -- common/autotest_common.sh@10 -- $ set +x 00:01:36.124 ************************************ 00:01:36.124 START TEST ubsan 00:01:36.124 ************************************ 00:01:36.124 using ubsan 00:01:36.124 15:06:18 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:36.124 00:01:36.124 real 0m0.000s 00:01:36.124 user 0m0.000s 00:01:36.124 sys 0m0.000s 00:01:36.124 15:06:18 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:36.124 15:06:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:36.124 ************************************ 00:01:36.124 END TEST ubsan 00:01:36.124 ************************************ 00:01:36.383 15:06:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:36.383 15:06:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:36.383 15:06:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:36.383 15:06:18 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:36.383 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:36.383 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:36.948 Using 'verbs' RDMA provider 00:01:53.220 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:11.325 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:11.325 Creating mk/config.mk...done. 00:02:11.325 Creating mk/cc.flags.mk...done. 00:02:11.325 Type 'make' to build. 
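For reference, the configure invocation recorded a few steps above, reflowed one group of options per line. The flags are verbatim from the log; only the layout and the comment are added. Note how --enable-asan/--enable-ubsan and --with-xnvme line up with SPDK_RUN_ASAN, SPDK_RUN_UBSAN and SPDK_TEST_XNVME in autorun-spdk.conf:

    # Debug build with fatal warnings, sanitizers on, unit tests skipped
    # (the functional tests run inside the VM instead).
    /home/vagrant/spdk_repo/spdk/configure \
        --enable-debug --enable-werror \
        --enable-asan --enable-ubsan --enable-coverage \
        --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator \
        --with-ublk --with-xnvme --with-shared \
        --disable-unit-tests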
00:02:11.325 15:06:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:11.325 15:06:52 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.325 15:06:52 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.325 15:06:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.325 ************************************ 00:02:11.325 START TEST make 00:02:11.325 ************************************ 00:02:11.325 15:06:52 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:11.325 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:11.325 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:11.325 meson setup builddir \ 00:02:11.325 -Dwith-libaio=enabled \ 00:02:11.325 -Dwith-liburing=enabled \ 00:02:11.325 -Dwith-libvfn=disabled \ 00:02:11.325 -Dwith-spdk=disabled \ 00:02:11.325 -Dexamples=false \ 00:02:11.325 -Dtests=false \ 00:02:11.325 -Dtools=false && \ 00:02:11.325 meson compile -C builddir && \ 00:02:11.325 cd -) 00:02:11.325 make[1]: Nothing to be done for 'all'. 00:02:12.293 The Meson build system 00:02:12.293 Version: 1.5.0 00:02:12.293 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:12.293 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:12.293 Build type: native build 00:02:12.293 Project name: xnvme 00:02:12.293 Project version: 0.7.5 00:02:12.293 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:12.293 C linker for the host machine: cc ld.bfd 2.40-14 00:02:12.293 Host machine cpu family: x86_64 00:02:12.293 Host machine cpu: x86_64 00:02:12.293 Message: host_machine.system: linux 00:02:12.293 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:12.293 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:12.293 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.293 Run-time dependency threads found: YES 00:02:12.293 Has header "setupapi.h" : NO 00:02:12.293 Has header "linux/blkzoned.h" : YES 00:02:12.293 Has header "linux/blkzoned.h" : YES (cached) 00:02:12.293 Has header "libaio.h" : YES 00:02:12.293 Library aio found: YES 00:02:12.293 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:12.293 Run-time dependency liburing found: YES 2.2 00:02:12.293 Dependency libvfn skipped: feature with-libvfn disabled 00:02:12.293 Found CMake: /usr/bin/cmake (3.27.7) 00:02:12.293 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:12.293 Subproject spdk : skipped: feature with-spdk disabled 00:02:12.293 Run-time dependency appleframeworks found: NO (tried framework) 00:02:12.293 Run-time dependency appleframeworks found: NO (tried framework) 00:02:12.293 Library rt found: YES 00:02:12.293 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:12.293 Configuring xnvme_config.h using configuration 00:02:12.293 Configuring xnvme.spec using configuration 00:02:12.293 Run-time dependency bash-completion found: YES 2.11 00:02:12.293 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:12.293 Program cp found: YES (/usr/bin/cp) 00:02:12.293 Build targets in project: 3 00:02:12.293 00:02:12.293 xnvme 0.7.5 00:02:12.293 00:02:12.293 Subprojects 00:02:12.293 spdk : NO Feature 'with-spdk' disabled 00:02:12.293 00:02:12.293 User defined options 00:02:12.293 examples : false 00:02:12.293 tests : false 00:02:12.293 tools : false 00:02:12.293 with-libaio : enabled 00:02:12.293 with-liburing: enabled 00:02:12.293 with-libvfn : disabled 00:02:12.293 with-spdk : disabled 
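The START TEST / END TEST banners and the real/user/sys timings scattered through this log come from the run_test wrapper in common/autotest_common.sh (visible in the xtrace prefixes above). A sketch of its observable behavior; the body below is a reconstruction for illustration, not the actual function:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # produces the real/user/sys lines seen after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # Invoked above as: run_test make make -j10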
00:02:12.293 00:02:12.293 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.576 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:12.576 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:12.833 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:12.833 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:12.833 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:12.833 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:12.833 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:12.833 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:12.833 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:12.833 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:12.833 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:12.833 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:12.833 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:12.833 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:12.833 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:12.833 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:12.833 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:12.833 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:12.833 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:12.833 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:12.833 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:12.833 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:13.090 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:13.090 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:13.090 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:13.090 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:13.090 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:13.090 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:13.090 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:13.090 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:13.090 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:13.090 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:13.090 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:13.090 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:13.090 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:13.090 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:13.090 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:13.090 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:13.090 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:13.090 [39/76] Compiling C object 
lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:13.090 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:13.090 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:13.090 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:13.090 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:13.090 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:13.090 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:13.090 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:13.090 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:13.090 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:13.090 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:13.090 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:13.090 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:13.090 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:13.090 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:13.090 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:13.346 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:13.346 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:13.346 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:13.346 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:13.346 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:13.346 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:13.346 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:13.346 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:13.346 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:13.346 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:13.346 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:13.346 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:13.346 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:13.346 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:13.346 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:13.346 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:13.346 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:13.346 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:13.602 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:13.858 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:13.858 [75/76] Linking static target lib/libxnvme.a 00:02:13.858 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:13.858 INFO: autodetecting backend as ninja 00:02:13.858 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:13.858 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:20.403 The Meson build system 00:02:20.403 Version: 1.5.0 00:02:20.403 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:20.403 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:20.403 Build type: native build 00:02:20.403 
Program cat found: YES (/usr/bin/cat) 00:02:20.403 Project name: DPDK 00:02:20.403 Project version: 24.03.0 00:02:20.403 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:20.403 C linker for the host machine: cc ld.bfd 2.40-14 00:02:20.403 Host machine cpu family: x86_64 00:02:20.403 Host machine cpu: x86_64 00:02:20.403 Message: ## Building in Developer Mode ## 00:02:20.403 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:20.403 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:20.403 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:20.403 Program python3 found: YES (/usr/bin/python3) 00:02:20.403 Program cat found: YES (/usr/bin/cat) 00:02:20.403 Compiler for C supports arguments -march=native: YES 00:02:20.403 Checking for size of "void *" : 8 00:02:20.403 Checking for size of "void *" : 8 (cached) 00:02:20.403 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:20.403 Library m found: YES 00:02:20.403 Library numa found: YES 00:02:20.403 Has header "numaif.h" : YES 00:02:20.403 Library fdt found: NO 00:02:20.403 Library execinfo found: NO 00:02:20.403 Has header "execinfo.h" : YES 00:02:20.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:20.403 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:20.403 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:20.403 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:20.403 Run-time dependency openssl found: YES 3.1.1 00:02:20.403 Run-time dependency libpcap found: YES 1.10.4 00:02:20.403 Has header "pcap.h" with dependency libpcap: YES 00:02:20.403 Compiler for C supports arguments -Wcast-qual: YES 00:02:20.403 Compiler for C supports arguments -Wdeprecated: YES 00:02:20.403 Compiler for C supports arguments -Wformat: YES 00:02:20.403 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:20.403 Compiler for C supports arguments -Wformat-security: NO 00:02:20.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:20.403 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:20.403 Compiler for C supports arguments -Wnested-externs: YES 00:02:20.403 Compiler for C supports arguments -Wold-style-definition: YES 00:02:20.403 Compiler for C supports arguments -Wpointer-arith: YES 00:02:20.403 Compiler for C supports arguments -Wsign-compare: YES 00:02:20.403 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:20.403 Compiler for C supports arguments -Wundef: YES 00:02:20.403 Compiler for C supports arguments -Wwrite-strings: YES 00:02:20.403 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:20.403 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:20.403 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:20.403 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:20.403 Program objdump found: YES (/usr/bin/objdump) 00:02:20.403 Compiler for C supports arguments -mavx512f: YES 00:02:20.403 Checking if "AVX512 checking" compiles: YES 00:02:20.403 Fetching value of define "__SSE4_2__" : 1 00:02:20.403 Fetching value of define "__AES__" : 1 00:02:20.403 Fetching value of define "__AVX__" : 1 00:02:20.403 Fetching value of define "__AVX2__" : 1 00:02:20.403 Fetching value of define "__AVX512BW__" : 1 00:02:20.403 Fetching value of define "__AVX512CD__" : 1 
00:02:20.403 Fetching value of define "__AVX512DQ__" : 1 00:02:20.403 Fetching value of define "__AVX512F__" : 1 00:02:20.403 Fetching value of define "__AVX512VL__" : 1 00:02:20.403 Fetching value of define "__PCLMUL__" : 1 00:02:20.403 Fetching value of define "__RDRND__" : 1 00:02:20.403 Fetching value of define "__RDSEED__" : 1 00:02:20.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:20.403 Fetching value of define "__znver1__" : (undefined) 00:02:20.403 Fetching value of define "__znver2__" : (undefined) 00:02:20.403 Fetching value of define "__znver3__" : (undefined) 00:02:20.403 Fetching value of define "__znver4__" : (undefined) 00:02:20.403 Library asan found: YES 00:02:20.403 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:20.403 Message: lib/log: Defining dependency "log" 00:02:20.403 Message: lib/kvargs: Defining dependency "kvargs" 00:02:20.403 Message: lib/telemetry: Defining dependency "telemetry" 00:02:20.403 Library rt found: YES 00:02:20.403 Checking for function "getentropy" : NO 00:02:20.403 Message: lib/eal: Defining dependency "eal" 00:02:20.403 Message: lib/ring: Defining dependency "ring" 00:02:20.403 Message: lib/rcu: Defining dependency "rcu" 00:02:20.403 Message: lib/mempool: Defining dependency "mempool" 00:02:20.403 Message: lib/mbuf: Defining dependency "mbuf" 00:02:20.403 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:20.403 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:20.403 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:20.403 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:20.403 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:20.403 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:20.403 Compiler for C supports arguments -mpclmul: YES 00:02:20.403 Compiler for C supports arguments -maes: YES 00:02:20.403 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:20.403 Compiler for C supports arguments -mavx512bw: YES 00:02:20.403 Compiler for C supports arguments -mavx512dq: YES 00:02:20.403 Compiler for C supports arguments -mavx512vl: YES 00:02:20.403 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:20.403 Compiler for C supports arguments -mavx2: YES 00:02:20.403 Compiler for C supports arguments -mavx: YES 00:02:20.403 Message: lib/net: Defining dependency "net" 00:02:20.403 Message: lib/meter: Defining dependency "meter" 00:02:20.403 Message: lib/ethdev: Defining dependency "ethdev" 00:02:20.403 Message: lib/pci: Defining dependency "pci" 00:02:20.404 Message: lib/cmdline: Defining dependency "cmdline" 00:02:20.404 Message: lib/hash: Defining dependency "hash" 00:02:20.404 Message: lib/timer: Defining dependency "timer" 00:02:20.404 Message: lib/compressdev: Defining dependency "compressdev" 00:02:20.404 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:20.404 Message: lib/dmadev: Defining dependency "dmadev" 00:02:20.404 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:20.404 Message: lib/power: Defining dependency "power" 00:02:20.404 Message: lib/reorder: Defining dependency "reorder" 00:02:20.404 Message: lib/security: Defining dependency "security" 00:02:20.404 Has header "linux/userfaultfd.h" : YES 00:02:20.404 Has header "linux/vduse.h" : YES 00:02:20.404 Message: lib/vhost: Defining dependency "vhost" 00:02:20.404 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:20.404 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:20.404 Message: drivers/bus/vdev: 
Defining dependency "bus_vdev" 00:02:20.404 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:20.404 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:20.404 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:20.404 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:20.404 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:20.404 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:20.404 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:20.404 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:20.404 Configuring doxy-api-html.conf using configuration 00:02:20.404 Configuring doxy-api-man.conf using configuration 00:02:20.404 Program mandb found: YES (/usr/bin/mandb) 00:02:20.404 Program sphinx-build found: NO 00:02:20.404 Configuring rte_build_config.h using configuration 00:02:20.404 Message: 00:02:20.404 ================= 00:02:20.404 Applications Enabled 00:02:20.404 ================= 00:02:20.404 00:02:20.404 apps: 00:02:20.404 00:02:20.404 00:02:20.404 Message: 00:02:20.404 ================= 00:02:20.404 Libraries Enabled 00:02:20.404 ================= 00:02:20.404 00:02:20.404 libs: 00:02:20.404 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:20.404 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:20.404 cryptodev, dmadev, power, reorder, security, vhost, 00:02:20.404 00:02:20.404 Message: 00:02:20.404 =============== 00:02:20.404 Drivers Enabled 00:02:20.404 =============== 00:02:20.404 00:02:20.404 common: 00:02:20.404 00:02:20.404 bus: 00:02:20.404 pci, vdev, 00:02:20.404 mempool: 00:02:20.404 ring, 00:02:20.404 dma: 00:02:20.404 00:02:20.404 net: 00:02:20.404 00:02:20.404 crypto: 00:02:20.404 00:02:20.404 compress: 00:02:20.404 00:02:20.404 vdpa: 00:02:20.404 00:02:20.404 00:02:20.404 Message: 00:02:20.404 ================= 00:02:20.404 Content Skipped 00:02:20.404 ================= 00:02:20.404 00:02:20.404 apps: 00:02:20.404 dumpcap: explicitly disabled via build config 00:02:20.404 graph: explicitly disabled via build config 00:02:20.404 pdump: explicitly disabled via build config 00:02:20.404 proc-info: explicitly disabled via build config 00:02:20.404 test-acl: explicitly disabled via build config 00:02:20.404 test-bbdev: explicitly disabled via build config 00:02:20.404 test-cmdline: explicitly disabled via build config 00:02:20.404 test-compress-perf: explicitly disabled via build config 00:02:20.404 test-crypto-perf: explicitly disabled via build config 00:02:20.404 test-dma-perf: explicitly disabled via build config 00:02:20.404 test-eventdev: explicitly disabled via build config 00:02:20.404 test-fib: explicitly disabled via build config 00:02:20.404 test-flow-perf: explicitly disabled via build config 00:02:20.404 test-gpudev: explicitly disabled via build config 00:02:20.404 test-mldev: explicitly disabled via build config 00:02:20.404 test-pipeline: explicitly disabled via build config 00:02:20.404 test-pmd: explicitly disabled via build config 00:02:20.404 test-regex: explicitly disabled via build config 00:02:20.404 test-sad: explicitly disabled via build config 00:02:20.404 test-security-perf: explicitly disabled via build config 00:02:20.404 00:02:20.404 libs: 00:02:20.404 argparse: explicitly disabled via build config 00:02:20.404 metrics: explicitly disabled via build config 00:02:20.404 acl: explicitly disabled via build config 
00:02:20.404 bbdev: explicitly disabled via build config 00:02:20.404 bitratestats: explicitly disabled via build config 00:02:20.404 bpf: explicitly disabled via build config 00:02:20.404 cfgfile: explicitly disabled via build config 00:02:20.404 distributor: explicitly disabled via build config 00:02:20.404 efd: explicitly disabled via build config 00:02:20.404 eventdev: explicitly disabled via build config 00:02:20.404 dispatcher: explicitly disabled via build config 00:02:20.404 gpudev: explicitly disabled via build config 00:02:20.404 gro: explicitly disabled via build config 00:02:20.404 gso: explicitly disabled via build config 00:02:20.404 ip_frag: explicitly disabled via build config 00:02:20.404 jobstats: explicitly disabled via build config 00:02:20.404 latencystats: explicitly disabled via build config 00:02:20.404 lpm: explicitly disabled via build config 00:02:20.404 member: explicitly disabled via build config 00:02:20.404 pcapng: explicitly disabled via build config 00:02:20.404 rawdev: explicitly disabled via build config 00:02:20.404 regexdev: explicitly disabled via build config 00:02:20.404 mldev: explicitly disabled via build config 00:02:20.404 rib: explicitly disabled via build config 00:02:20.404 sched: explicitly disabled via build config 00:02:20.404 stack: explicitly disabled via build config 00:02:20.404 ipsec: explicitly disabled via build config 00:02:20.404 pdcp: explicitly disabled via build config 00:02:20.404 fib: explicitly disabled via build config 00:02:20.404 port: explicitly disabled via build config 00:02:20.404 pdump: explicitly disabled via build config 00:02:20.404 table: explicitly disabled via build config 00:02:20.404 pipeline: explicitly disabled via build config 00:02:20.404 graph: explicitly disabled via build config 00:02:20.404 node: explicitly disabled via build config 00:02:20.404 00:02:20.404 drivers: 00:02:20.404 common/cpt: not in enabled drivers build config 00:02:20.404 common/dpaax: not in enabled drivers build config 00:02:20.404 common/iavf: not in enabled drivers build config 00:02:20.404 common/idpf: not in enabled drivers build config 00:02:20.404 common/ionic: not in enabled drivers build config 00:02:20.404 common/mvep: not in enabled drivers build config 00:02:20.404 common/octeontx: not in enabled drivers build config 00:02:20.404 bus/auxiliary: not in enabled drivers build config 00:02:20.404 bus/cdx: not in enabled drivers build config 00:02:20.404 bus/dpaa: not in enabled drivers build config 00:02:20.404 bus/fslmc: not in enabled drivers build config 00:02:20.404 bus/ifpga: not in enabled drivers build config 00:02:20.404 bus/platform: not in enabled drivers build config 00:02:20.404 bus/uacce: not in enabled drivers build config 00:02:20.404 bus/vmbus: not in enabled drivers build config 00:02:20.404 common/cnxk: not in enabled drivers build config 00:02:20.404 common/mlx5: not in enabled drivers build config 00:02:20.404 common/nfp: not in enabled drivers build config 00:02:20.404 common/nitrox: not in enabled drivers build config 00:02:20.404 common/qat: not in enabled drivers build config 00:02:20.404 common/sfc_efx: not in enabled drivers build config 00:02:20.404 mempool/bucket: not in enabled drivers build config 00:02:20.404 mempool/cnxk: not in enabled drivers build config 00:02:20.405 mempool/dpaa: not in enabled drivers build config 00:02:20.405 mempool/dpaa2: not in enabled drivers build config 00:02:20.405 mempool/octeontx: not in enabled drivers build config 00:02:20.405 mempool/stack: not in enabled 
drivers build config 00:02:20.405 dma/cnxk: not in enabled drivers build config 00:02:20.405 dma/dpaa: not in enabled drivers build config 00:02:20.405 dma/dpaa2: not in enabled drivers build config 00:02:20.405 dma/hisilicon: not in enabled drivers build config 00:02:20.405 dma/idxd: not in enabled drivers build config 00:02:20.405 dma/ioat: not in enabled drivers build config 00:02:20.405 dma/skeleton: not in enabled drivers build config 00:02:20.405 net/af_packet: not in enabled drivers build config 00:02:20.405 net/af_xdp: not in enabled drivers build config 00:02:20.405 net/ark: not in enabled drivers build config 00:02:20.405 net/atlantic: not in enabled drivers build config 00:02:20.405 net/avp: not in enabled drivers build config 00:02:20.405 net/axgbe: not in enabled drivers build config 00:02:20.405 net/bnx2x: not in enabled drivers build config 00:02:20.405 net/bnxt: not in enabled drivers build config 00:02:20.405 net/bonding: not in enabled drivers build config 00:02:20.405 net/cnxk: not in enabled drivers build config 00:02:20.405 net/cpfl: not in enabled drivers build config 00:02:20.405 net/cxgbe: not in enabled drivers build config 00:02:20.405 net/dpaa: not in enabled drivers build config 00:02:20.405 net/dpaa2: not in enabled drivers build config 00:02:20.405 net/e1000: not in enabled drivers build config 00:02:20.405 net/ena: not in enabled drivers build config 00:02:20.405 net/enetc: not in enabled drivers build config 00:02:20.405 net/enetfec: not in enabled drivers build config 00:02:20.405 net/enic: not in enabled drivers build config 00:02:20.405 net/failsafe: not in enabled drivers build config 00:02:20.405 net/fm10k: not in enabled drivers build config 00:02:20.405 net/gve: not in enabled drivers build config 00:02:20.405 net/hinic: not in enabled drivers build config 00:02:20.405 net/hns3: not in enabled drivers build config 00:02:20.405 net/i40e: not in enabled drivers build config 00:02:20.405 net/iavf: not in enabled drivers build config 00:02:20.405 net/ice: not in enabled drivers build config 00:02:20.405 net/idpf: not in enabled drivers build config 00:02:20.405 net/igc: not in enabled drivers build config 00:02:20.405 net/ionic: not in enabled drivers build config 00:02:20.405 net/ipn3ke: not in enabled drivers build config 00:02:20.405 net/ixgbe: not in enabled drivers build config 00:02:20.405 net/mana: not in enabled drivers build config 00:02:20.405 net/memif: not in enabled drivers build config 00:02:20.405 net/mlx4: not in enabled drivers build config 00:02:20.405 net/mlx5: not in enabled drivers build config 00:02:20.405 net/mvneta: not in enabled drivers build config 00:02:20.405 net/mvpp2: not in enabled drivers build config 00:02:20.405 net/netvsc: not in enabled drivers build config 00:02:20.405 net/nfb: not in enabled drivers build config 00:02:20.405 net/nfp: not in enabled drivers build config 00:02:20.405 net/ngbe: not in enabled drivers build config 00:02:20.405 net/null: not in enabled drivers build config 00:02:20.405 net/octeontx: not in enabled drivers build config 00:02:20.405 net/octeon_ep: not in enabled drivers build config 00:02:20.405 net/pcap: not in enabled drivers build config 00:02:20.405 net/pfe: not in enabled drivers build config 00:02:20.405 net/qede: not in enabled drivers build config 00:02:20.405 net/ring: not in enabled drivers build config 00:02:20.405 net/sfc: not in enabled drivers build config 00:02:20.405 net/softnic: not in enabled drivers build config 00:02:20.405 net/tap: not in enabled drivers build config 
00:02:20.405 net/thunderx: not in enabled drivers build config 00:02:20.405 net/txgbe: not in enabled drivers build config 00:02:20.405 net/vdev_netvsc: not in enabled drivers build config 00:02:20.405 net/vhost: not in enabled drivers build config 00:02:20.405 net/virtio: not in enabled drivers build config 00:02:20.405 net/vmxnet3: not in enabled drivers build config 00:02:20.405 raw/*: missing internal dependency, "rawdev" 00:02:20.405 crypto/armv8: not in enabled drivers build config 00:02:20.405 crypto/bcmfs: not in enabled drivers build config 00:02:20.405 crypto/caam_jr: not in enabled drivers build config 00:02:20.405 crypto/ccp: not in enabled drivers build config 00:02:20.405 crypto/cnxk: not in enabled drivers build config 00:02:20.405 crypto/dpaa_sec: not in enabled drivers build config 00:02:20.405 crypto/dpaa2_sec: not in enabled drivers build config 00:02:20.405 crypto/ipsec_mb: not in enabled drivers build config 00:02:20.405 crypto/mlx5: not in enabled drivers build config 00:02:20.405 crypto/mvsam: not in enabled drivers build config 00:02:20.405 crypto/nitrox: not in enabled drivers build config 00:02:20.405 crypto/null: not in enabled drivers build config 00:02:20.405 crypto/octeontx: not in enabled drivers build config 00:02:20.405 crypto/openssl: not in enabled drivers build config 00:02:20.405 crypto/scheduler: not in enabled drivers build config 00:02:20.405 crypto/uadk: not in enabled drivers build config 00:02:20.405 crypto/virtio: not in enabled drivers build config 00:02:20.405 compress/isal: not in enabled drivers build config 00:02:20.405 compress/mlx5: not in enabled drivers build config 00:02:20.405 compress/nitrox: not in enabled drivers build config 00:02:20.405 compress/octeontx: not in enabled drivers build config 00:02:20.405 compress/zlib: not in enabled drivers build config 00:02:20.405 regex/*: missing internal dependency, "regexdev" 00:02:20.405 ml/*: missing internal dependency, "mldev" 00:02:20.405 vdpa/ifc: not in enabled drivers build config 00:02:20.405 vdpa/mlx5: not in enabled drivers build config 00:02:20.405 vdpa/nfp: not in enabled drivers build config 00:02:20.405 vdpa/sfc: not in enabled drivers build config 00:02:20.405 event/*: missing internal dependency, "eventdev" 00:02:20.405 baseband/*: missing internal dependency, "bbdev" 00:02:20.405 gpu/*: missing internal dependency, "gpudev" 00:02:20.405 00:02:20.405 00:02:20.971 Build targets in project: 85 00:02:20.971 00:02:20.971 DPDK 24.03.0 00:02:20.971 00:02:20.971 User defined options 00:02:20.971 buildtype : debug 00:02:20.971 default_library : shared 00:02:20.971 libdir : lib 00:02:20.971 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:20.971 b_sanitize : address 00:02:20.971 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:20.971 c_link_args : 00:02:20.971 cpu_instruction_set: native 00:02:20.971 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:20.971 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:20.971 enable_docs : false 00:02:20.971 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:20.971 
enable_kmods : false 00:02:20.971 max_lcores : 128 00:02:20.971 tests : false 00:02:20.971 00:02:20.971 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.229 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:21.486 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.486 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.486 [3/268] Linking static target lib/librte_kvargs.a 00:02:21.486 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.486 [5/268] Linking static target lib/librte_log.a 00:02:21.486 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.744 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:21.744 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.744 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:22.001 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.001 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.001 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.001 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:22.001 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.001 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.001 [16/268] Linking static target lib/librte_telemetry.a 00:02:22.001 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.001 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.566 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.566 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.566 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.566 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.566 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.566 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.566 [25/268] Linking target lib/librte_log.so.24.1 00:02:22.566 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.566 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.566 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.566 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:22.823 [30/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.823 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:22.823 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.823 [33/268] Linking target lib/librte_kvargs.so.24.1 00:02:22.823 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:23.079 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.079 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.079 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.079 [38/268] 
Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:23.079 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.079 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.079 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:23.079 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.079 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.079 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.079 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.368 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:23.369 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:23.625 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.626 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:23.626 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:23.626 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.882 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:23.882 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:23.882 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:23.882 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:23.882 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:23.882 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.139 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.139 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.139 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.139 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:24.396 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.396 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.396 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.396 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.396 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.396 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.653 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.653 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.910 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.910 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.910 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:24.910 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.910 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.910 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:24.910 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.910 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:24.910 
[78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.910 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:25.167 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.167 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:25.167 [82/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:25.167 [83/268] Linking static target lib/librte_ring.a 00:02:25.424 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.424 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:25.424 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:25.424 [87/268] Linking static target lib/librte_eal.a 00:02:25.682 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.682 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.682 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:25.682 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:25.682 [92/268] Linking static target lib/librte_rcu.a 00:02:25.682 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:25.682 [94/268] Linking static target lib/librte_mempool.a 00:02:25.682 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:25.682 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:25.938 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.938 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:25.938 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:25.938 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:26.195 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.195 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:26.195 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:26.195 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:26.195 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:26.452 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:26.452 [107/268] Linking static target lib/librte_net.a 00:02:26.452 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:26.452 [109/268] Linking static target lib/librte_meter.a 00:02:26.452 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:26.709 [111/268] Linking static target lib/librte_mbuf.a 00:02:26.709 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:26.709 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:26.709 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.709 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:26.709 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:26.709 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.965 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.223 [119/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:27.223 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:27.482 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:27.482 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:27.482 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:27.738 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.738 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:27.738 [126/268] Linking static target lib/librte_pci.a 00:02:27.738 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:27.738 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:27.995 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:27.995 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:27.995 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:27.995 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:27.995 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:27.995 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:27.995 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:28.252 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.252 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:28.252 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:28.252 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:28.252 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:28.252 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:28.252 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:28.252 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:28.252 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:28.252 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:28.252 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:28.508 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:28.508 [148/268] Linking static target lib/librte_cmdline.a 00:02:28.508 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:28.765 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:28.765 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:28.765 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:28.765 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:28.765 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:28.765 [155/268] Linking static target lib/librte_timer.a 00:02:29.063 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:29.063 [157/268] Linking static target lib/librte_compressdev.a 00:02:29.322 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:29.322 [159/268] Compiling C 
object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:29.322 [160/268] Linking static target lib/librte_ethdev.a 00:02:29.322 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:29.322 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:29.579 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:29.579 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.579 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:29.579 [166/268] Linking static target lib/librte_dmadev.a 00:02:29.579 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:29.579 [168/268] Linking static target lib/librte_hash.a 00:02:29.579 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:29.850 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:29.850 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:29.850 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.110 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.110 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.110 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.110 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.110 [177/268] Linking static target lib/librte_cryptodev.a 00:02:30.367 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.367 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.367 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.625 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.625 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.625 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.625 [184/268] Linking static target lib/librte_power.a 00:02:30.625 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.883 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.883 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.883 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.883 [189/268] Linking static target lib/librte_reorder.a 00:02:31.140 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:31.140 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:31.140 [192/268] Linking static target lib/librte_security.a 00:02:31.140 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:31.398 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.657 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:31.657 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.916 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.916 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:31.916 
[199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:32.175 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.175 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:32.175 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:32.433 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:32.433 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:32.433 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:32.692 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:32.692 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.692 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:32.692 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:32.692 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:32.692 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:32.951 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.951 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.951 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.951 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:32.951 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.951 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.951 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.951 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:32.951 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.951 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:33.211 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:33.211 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.211 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:33.211 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:33.211 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.470 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.037 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.334 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.593 [230/268] Linking static target lib/librte_vhost.a 00:02:37.593 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.593 [232/268] Linking target lib/librte_eal.so.24.1 00:02:37.853 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:37.853 [234/268] Linking target lib/librte_meter.so.24.1 00:02:37.853 [235/268] Linking target lib/librte_timer.so.24.1 00:02:37.853 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:37.853 [237/268] Linking target lib/librte_pci.so.24.1 00:02:37.853 
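[Editor's context note] The numbered [n/268] compile and link entries above are ninja building DPDK 24.03 as SPDK's bundled submodule (note the prefix /home/vagrant/spdk_repo/spdk/dpdk/build), with the Meson configuration summarized in the "User defined options" block earlier: debug buildtype, shared libraries, AddressSanitizer via b_sanitize=address, and long disable_apps/disable_libs lists. A minimal sketch of the equivalent standalone invocation follows, assuming a DPDK 24.03 checkout; the build directory name is illustrative, the option values are copied from the log, and the full disable_apps/disable_libs lists (elided here) are exactly as recorded above:

  # Sketch only; option values copied from the "User defined options" block above.
  # disable_apps/disable_libs take the full comma-separated lists recorded in the log.
  meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Db_sanitize=address \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
  ninja -C build-tmp -j 10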
[238/268] Linking target lib/librte_ring.so.24.1 00:02:37.853 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:38.112 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:38.112 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:38.112 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:38.112 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:38.112 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:38.112 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:38.112 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:38.112 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:38.112 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:38.112 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:38.112 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.112 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:38.112 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:38.370 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:38.370 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:38.370 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:38.370 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:38.370 [257/268] Linking target lib/librte_net.so.24.1 00:02:38.629 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:38.629 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:38.629 [260/268] Linking target lib/librte_hash.so.24.1 00:02:38.629 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:38.629 [262/268] Linking target lib/librte_security.so.24.1 00:02:38.629 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:38.888 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:38.888 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:38.888 [266/268] Linking target lib/librte_power.so.24.1 00:02:39.456 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.714 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:39.714 INFO: autodetecting backend as ninja 00:02:39.714 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:57.831 CC lib/ut/ut.o 00:02:57.831 CC lib/ut_mock/mock.o 00:02:57.831 CC lib/log/log.o 00:02:57.831 CC lib/log/log_flags.o 00:02:57.831 CC lib/log/log_deprecated.o 00:02:57.831 LIB libspdk_ut_mock.a 00:02:57.831 LIB libspdk_ut.a 00:02:57.831 SO libspdk_ut_mock.so.6.0 00:02:57.831 SO libspdk_ut.so.2.0 00:02:57.831 LIB libspdk_log.a 00:02:57.831 SYMLINK libspdk_ut_mock.so 00:02:57.831 SYMLINK libspdk_ut.so 00:02:57.831 SO libspdk_log.so.7.1 00:02:57.831 SYMLINK libspdk_log.so 00:02:57.831 CXX lib/trace_parser/trace.o 00:02:57.831 CC lib/ioat/ioat.o 00:02:57.831 CC lib/util/bit_array.o 00:02:57.831 CC lib/util/base64.o 00:02:57.831 CC lib/util/cpuset.o 00:02:57.831 CC lib/util/crc16.o 00:02:57.831 CC lib/util/crc32.o 00:02:57.831 CC lib/util/crc32c.o 00:02:57.831 CC lib/dma/dma.o 00:02:57.831 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:57.831 CC lib/vfio_user/host/vfio_user.o 00:02:57.831 CC lib/util/crc32_ieee.o 00:02:57.831 CC lib/util/crc64.o 00:02:57.831 CC lib/util/dif.o 00:02:57.831 LIB libspdk_dma.a 00:02:57.832 CC lib/util/fd.o 00:02:57.832 CC lib/util/fd_group.o 00:02:57.832 SO libspdk_dma.so.5.0 00:02:57.832 CC lib/util/file.o 00:02:57.832 LIB libspdk_ioat.a 00:02:57.832 CC lib/util/hexlify.o 00:02:57.832 SYMLINK libspdk_dma.so 00:02:57.832 CC lib/util/iov.o 00:02:57.832 SO libspdk_ioat.so.7.0 00:02:57.832 CC lib/util/math.o 00:02:57.832 LIB libspdk_vfio_user.a 00:02:57.832 CC lib/util/net.o 00:02:57.832 SYMLINK libspdk_ioat.so 00:02:57.832 CC lib/util/pipe.o 00:02:57.832 SO libspdk_vfio_user.so.5.0 00:02:57.832 CC lib/util/strerror_tls.o 00:02:57.832 CC lib/util/string.o 00:02:57.832 SYMLINK libspdk_vfio_user.so 00:02:57.832 CC lib/util/uuid.o 00:02:57.832 CC lib/util/xor.o 00:02:57.832 CC lib/util/zipf.o 00:02:57.832 CC lib/util/md5.o 00:02:57.832 LIB libspdk_util.a 00:02:57.832 SO libspdk_util.so.10.0 00:02:57.832 LIB libspdk_trace_parser.a 00:02:57.832 SYMLINK libspdk_util.so 00:02:57.832 SO libspdk_trace_parser.so.6.0 00:02:57.832 SYMLINK libspdk_trace_parser.so 00:02:57.832 CC lib/env_dpdk/env.o 00:02:57.832 CC lib/env_dpdk/memory.o 00:02:57.832 CC lib/env_dpdk/pci.o 00:02:57.832 CC lib/env_dpdk/init.o 00:02:57.832 CC lib/conf/conf.o 00:02:57.832 CC lib/rdma_utils/rdma_utils.o 00:02:57.832 CC lib/rdma_provider/common.o 00:02:57.832 CC lib/vmd/vmd.o 00:02:57.832 CC lib/idxd/idxd.o 00:02:57.832 CC lib/json/json_parse.o 00:02:57.832 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:57.832 LIB libspdk_conf.a 00:02:57.832 CC lib/json/json_util.o 00:02:57.832 SO libspdk_conf.so.6.0 00:02:57.832 LIB libspdk_rdma_utils.a 00:02:57.832 SO libspdk_rdma_utils.so.1.0 00:02:57.832 SYMLINK libspdk_conf.so 00:02:57.832 CC lib/json/json_write.o 00:02:57.832 LIB libspdk_rdma_provider.a 00:02:57.832 CC lib/vmd/led.o 00:02:57.832 SYMLINK libspdk_rdma_utils.so 00:02:57.832 SO libspdk_rdma_provider.so.6.0 00:02:57.832 CC lib/env_dpdk/threads.o 00:02:57.832 CC lib/env_dpdk/pci_ioat.o 00:02:57.832 SYMLINK libspdk_rdma_provider.so 00:02:57.832 CC lib/idxd/idxd_user.o 00:02:57.832 CC lib/idxd/idxd_kernel.o 00:02:57.832 CC lib/env_dpdk/pci_virtio.o 00:02:57.832 CC lib/env_dpdk/pci_vmd.o 00:02:58.091 CC lib/env_dpdk/pci_idxd.o 00:02:58.091 CC lib/env_dpdk/pci_event.o 00:02:58.091 LIB libspdk_json.a 00:02:58.091 CC lib/env_dpdk/sigbus_handler.o 00:02:58.091 CC lib/env_dpdk/pci_dpdk.o 00:02:58.091 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:58.091 SO libspdk_json.so.6.0 00:02:58.091 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:58.091 LIB libspdk_idxd.a 00:02:58.091 LIB libspdk_vmd.a 00:02:58.091 SO libspdk_vmd.so.6.0 00:02:58.091 SYMLINK libspdk_json.so 00:02:58.091 SO libspdk_idxd.so.12.1 00:02:58.091 SYMLINK libspdk_idxd.so 00:02:58.091 SYMLINK libspdk_vmd.so 00:02:58.350 CC lib/jsonrpc/jsonrpc_server.o 00:02:58.350 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:58.350 CC lib/jsonrpc/jsonrpc_client.o 00:02:58.350 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:58.919 LIB libspdk_jsonrpc.a 00:02:58.919 SO libspdk_jsonrpc.so.6.0 00:02:58.919 SYMLINK libspdk_jsonrpc.so 00:02:58.919 LIB libspdk_env_dpdk.a 00:02:59.178 SO libspdk_env_dpdk.so.15.1 00:02:59.178 SYMLINK libspdk_env_dpdk.so 00:02:59.178 CC lib/rpc/rpc.o 00:02:59.437 LIB libspdk_rpc.a 00:02:59.437 SO libspdk_rpc.so.6.0 00:02:59.695 SYMLINK libspdk_rpc.so 00:02:59.953 CC lib/trace/trace.o 00:02:59.953 CC lib/trace/trace_rpc.o 00:02:59.953 CC 
lib/trace/trace_flags.o 00:02:59.953 CC lib/notify/notify.o 00:02:59.953 CC lib/notify/notify_rpc.o 00:02:59.953 CC lib/keyring/keyring.o 00:02:59.953 CC lib/keyring/keyring_rpc.o 00:03:00.212 LIB libspdk_notify.a 00:03:00.212 SO libspdk_notify.so.6.0 00:03:00.212 LIB libspdk_keyring.a 00:03:00.212 LIB libspdk_trace.a 00:03:00.212 SYMLINK libspdk_notify.so 00:03:00.212 SO libspdk_keyring.so.2.0 00:03:00.471 SO libspdk_trace.so.11.0 00:03:00.471 SYMLINK libspdk_keyring.so 00:03:00.471 SYMLINK libspdk_trace.so 00:03:00.731 CC lib/sock/sock.o 00:03:00.731 CC lib/sock/sock_rpc.o 00:03:00.731 CC lib/thread/thread.o 00:03:00.731 CC lib/thread/iobuf.o 00:03:01.299 LIB libspdk_sock.a 00:03:01.559 SO libspdk_sock.so.10.0 00:03:01.559 SYMLINK libspdk_sock.so 00:03:01.819 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.819 CC lib/nvme/nvme_ctrlr.o 00:03:01.819 CC lib/nvme/nvme_fabric.o 00:03:01.819 CC lib/nvme/nvme_ns.o 00:03:01.819 CC lib/nvme/nvme_ns_cmd.o 00:03:01.819 CC lib/nvme/nvme_pcie_common.o 00:03:01.819 CC lib/nvme/nvme_pcie.o 00:03:01.819 CC lib/nvme/nvme_qpair.o 00:03:01.819 CC lib/nvme/nvme.o 00:03:02.387 LIB libspdk_thread.a 00:03:02.646 SO libspdk_thread.so.11.0 00:03:02.646 CC lib/nvme/nvme_quirks.o 00:03:02.646 CC lib/nvme/nvme_transport.o 00:03:02.646 SYMLINK libspdk_thread.so 00:03:02.646 CC lib/nvme/nvme_discovery.o 00:03:02.646 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:02.905 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:02.905 CC lib/nvme/nvme_tcp.o 00:03:02.905 CC lib/accel/accel.o 00:03:02.905 CC lib/accel/accel_rpc.o 00:03:02.905 CC lib/accel/accel_sw.o 00:03:03.164 CC lib/nvme/nvme_opal.o 00:03:03.164 CC lib/nvme/nvme_io_msg.o 00:03:03.164 CC lib/nvme/nvme_poll_group.o 00:03:03.164 CC lib/nvme/nvme_zns.o 00:03:03.423 CC lib/nvme/nvme_stubs.o 00:03:03.423 CC lib/nvme/nvme_auth.o 00:03:03.423 CC lib/nvme/nvme_cuse.o 00:03:03.682 CC lib/nvme/nvme_rdma.o 00:03:03.941 CC lib/blob/blobstore.o 00:03:03.941 CC lib/init/json_config.o 00:03:03.941 CC lib/virtio/virtio.o 00:03:03.941 CC lib/fsdev/fsdev.o 00:03:04.201 LIB libspdk_accel.a 00:03:04.201 CC lib/init/subsystem.o 00:03:04.201 SO libspdk_accel.so.16.0 00:03:04.201 SYMLINK libspdk_accel.so 00:03:04.201 CC lib/init/subsystem_rpc.o 00:03:04.201 CC lib/init/rpc.o 00:03:04.201 CC lib/fsdev/fsdev_io.o 00:03:04.461 CC lib/fsdev/fsdev_rpc.o 00:03:04.461 CC lib/virtio/virtio_vhost_user.o 00:03:04.461 CC lib/blob/request.o 00:03:04.461 CC lib/blob/zeroes.o 00:03:04.461 LIB libspdk_init.a 00:03:04.461 CC lib/bdev/bdev.o 00:03:04.461 CC lib/bdev/bdev_rpc.o 00:03:04.461 SO libspdk_init.so.6.0 00:03:04.720 SYMLINK libspdk_init.so 00:03:04.720 CC lib/bdev/bdev_zone.o 00:03:04.720 CC lib/blob/blob_bs_dev.o 00:03:04.720 LIB libspdk_fsdev.a 00:03:04.720 SO libspdk_fsdev.so.2.0 00:03:04.720 CC lib/bdev/part.o 00:03:04.720 CC lib/virtio/virtio_vfio_user.o 00:03:04.720 CC lib/bdev/scsi_nvme.o 00:03:04.720 SYMLINK libspdk_fsdev.so 00:03:04.720 CC lib/virtio/virtio_pci.o 00:03:04.979 CC lib/event/app.o 00:03:04.979 CC lib/event/reactor.o 00:03:04.979 CC lib/event/log_rpc.o 00:03:04.979 CC lib/event/app_rpc.o 00:03:04.979 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:04.979 CC lib/event/scheduler_static.o 00:03:05.238 LIB libspdk_nvme.a 00:03:05.238 LIB libspdk_virtio.a 00:03:05.238 SO libspdk_virtio.so.7.0 00:03:05.238 SYMLINK libspdk_virtio.so 00:03:05.238 SO libspdk_nvme.so.14.1 00:03:05.498 LIB libspdk_event.a 00:03:05.498 SO libspdk_event.so.14.0 00:03:05.498 SYMLINK libspdk_event.so 00:03:05.498 SYMLINK libspdk_nvme.so 00:03:05.757 LIB libspdk_fuse_dispatcher.a 
00:03:05.757 SO libspdk_fuse_dispatcher.so.1.0 00:03:06.016 SYMLINK libspdk_fuse_dispatcher.so 00:03:07.428 LIB libspdk_bdev.a 00:03:07.428 LIB libspdk_blob.a 00:03:07.428 SO libspdk_bdev.so.17.0 00:03:07.428 SO libspdk_blob.so.11.0 00:03:07.687 SYMLINK libspdk_bdev.so 00:03:07.687 SYMLINK libspdk_blob.so 00:03:07.947 CC lib/ublk/ublk.o 00:03:07.947 CC lib/ublk/ublk_rpc.o 00:03:07.947 CC lib/nbd/nbd.o 00:03:07.947 CC lib/scsi/dev.o 00:03:07.947 CC lib/nbd/nbd_rpc.o 00:03:07.947 CC lib/scsi/lun.o 00:03:07.947 CC lib/ftl/ftl_core.o 00:03:07.947 CC lib/nvmf/ctrlr.o 00:03:07.947 CC lib/lvol/lvol.o 00:03:07.947 CC lib/blobfs/blobfs.o 00:03:07.947 CC lib/blobfs/tree.o 00:03:07.947 CC lib/nvmf/ctrlr_discovery.o 00:03:08.222 CC lib/nvmf/ctrlr_bdev.o 00:03:08.222 CC lib/scsi/port.o 00:03:08.222 CC lib/nvmf/subsystem.o 00:03:08.222 CC lib/ftl/ftl_init.o 00:03:08.222 LIB libspdk_nbd.a 00:03:08.222 CC lib/scsi/scsi.o 00:03:08.222 SO libspdk_nbd.so.7.0 00:03:08.490 SYMLINK libspdk_nbd.so 00:03:08.490 CC lib/scsi/scsi_bdev.o 00:03:08.490 CC lib/ftl/ftl_layout.o 00:03:08.490 CC lib/ftl/ftl_debug.o 00:03:08.490 LIB libspdk_ublk.a 00:03:08.490 SO libspdk_ublk.so.3.0 00:03:08.490 CC lib/ftl/ftl_io.o 00:03:08.749 SYMLINK libspdk_ublk.so 00:03:08.749 CC lib/ftl/ftl_sb.o 00:03:08.749 CC lib/ftl/ftl_l2p.o 00:03:08.749 LIB libspdk_blobfs.a 00:03:08.749 CC lib/scsi/scsi_pr.o 00:03:08.749 SO libspdk_blobfs.so.10.0 00:03:08.749 CC lib/nvmf/nvmf.o 00:03:09.008 CC lib/ftl/ftl_l2p_flat.o 00:03:09.008 CC lib/ftl/ftl_nv_cache.o 00:03:09.008 LIB libspdk_lvol.a 00:03:09.008 SYMLINK libspdk_blobfs.so 00:03:09.008 CC lib/scsi/scsi_rpc.o 00:03:09.008 SO libspdk_lvol.so.10.0 00:03:09.008 CC lib/ftl/ftl_band.o 00:03:09.008 CC lib/ftl/ftl_band_ops.o 00:03:09.008 SYMLINK libspdk_lvol.so 00:03:09.008 CC lib/ftl/ftl_writer.o 00:03:09.008 CC lib/scsi/task.o 00:03:09.008 CC lib/ftl/ftl_rq.o 00:03:09.267 CC lib/ftl/ftl_reloc.o 00:03:09.267 LIB libspdk_scsi.a 00:03:09.267 CC lib/ftl/ftl_l2p_cache.o 00:03:09.267 CC lib/nvmf/nvmf_rpc.o 00:03:09.267 CC lib/ftl/ftl_p2l.o 00:03:09.267 SO libspdk_scsi.so.9.0 00:03:09.267 CC lib/nvmf/transport.o 00:03:09.527 SYMLINK libspdk_scsi.so 00:03:09.527 CC lib/nvmf/tcp.o 00:03:09.527 CC lib/ftl/ftl_p2l_log.o 00:03:09.527 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.786 CC lib/nvmf/stubs.o 00:03:09.786 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.786 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.786 CC lib/iscsi/conn.o 00:03:10.046 CC lib/iscsi/init_grp.o 00:03:10.046 CC lib/vhost/vhost.o 00:03:10.046 CC lib/vhost/vhost_rpc.o 00:03:10.046 CC lib/vhost/vhost_scsi.o 00:03:10.046 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:10.046 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:10.305 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:10.305 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:10.305 CC lib/nvmf/mdns_server.o 00:03:10.305 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:10.305 CC lib/nvmf/rdma.o 00:03:10.564 CC lib/iscsi/iscsi.o 00:03:10.564 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:10.564 CC lib/iscsi/param.o 00:03:10.564 CC lib/iscsi/portal_grp.o 00:03:10.564 CC lib/iscsi/tgt_node.o 00:03:10.564 CC lib/nvmf/auth.o 00:03:10.823 CC lib/iscsi/iscsi_subsystem.o 00:03:10.823 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:10.823 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:10.823 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.083 CC lib/iscsi/iscsi_rpc.o 00:03:11.083 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.083 CC lib/vhost/vhost_blk.o 00:03:11.083 CC lib/iscsi/task.o 00:03:11.342 CC lib/ftl/utils/ftl_conf.o 00:03:11.342 CC lib/vhost/rte_vhost_user.o 00:03:11.342 CC 
lib/ftl/utils/ftl_md.o 00:03:11.342 CC lib/ftl/utils/ftl_mempool.o 00:03:11.342 CC lib/ftl/utils/ftl_bitmap.o 00:03:11.342 CC lib/ftl/utils/ftl_property.o 00:03:11.342 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:11.342 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:11.601 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:11.860 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:11.860 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:11.860 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:11.860 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:11.860 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:11.860 CC lib/ftl/base/ftl_base_dev.o 00:03:11.860 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.119 CC lib/ftl/ftl_trace.o 00:03:12.119 LIB libspdk_iscsi.a 00:03:12.119 SO libspdk_iscsi.so.8.0 00:03:12.119 LIB libspdk_ftl.a 00:03:12.397 LIB libspdk_vhost.a 00:03:12.397 SYMLINK libspdk_iscsi.so 00:03:12.397 SO libspdk_vhost.so.8.0 00:03:12.397 SYMLINK libspdk_vhost.so 00:03:12.673 SO libspdk_ftl.so.9.0 00:03:12.933 SYMLINK libspdk_ftl.so 00:03:12.933 LIB libspdk_nvmf.a 00:03:13.193 SO libspdk_nvmf.so.20.0 00:03:13.452 SYMLINK libspdk_nvmf.so 00:03:13.711 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.970 CC module/sock/posix/posix.o 00:03:13.970 CC module/accel/error/accel_error.o 00:03:13.970 CC module/accel/dsa/accel_dsa.o 00:03:13.970 CC module/accel/iaa/accel_iaa.o 00:03:13.970 CC module/keyring/file/keyring.o 00:03:13.970 CC module/blob/bdev/blob_bdev.o 00:03:13.970 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.970 CC module/accel/ioat/accel_ioat.o 00:03:13.970 CC module/fsdev/aio/fsdev_aio.o 00:03:13.970 LIB libspdk_env_dpdk_rpc.a 00:03:13.970 SO libspdk_env_dpdk_rpc.so.6.0 00:03:13.970 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.970 CC module/keyring/file/keyring_rpc.o 00:03:13.970 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:13.970 CC module/accel/error/accel_error_rpc.o 00:03:13.970 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.970 LIB libspdk_scheduler_dynamic.a 00:03:13.970 CC module/accel/iaa/accel_iaa_rpc.o 00:03:14.229 SO libspdk_scheduler_dynamic.so.4.0 00:03:14.229 LIB libspdk_keyring_file.a 00:03:14.229 CC module/accel/dsa/accel_dsa_rpc.o 00:03:14.229 LIB libspdk_blob_bdev.a 00:03:14.229 CC module/fsdev/aio/linux_aio_mgr.o 00:03:14.229 SYMLINK libspdk_scheduler_dynamic.so 00:03:14.229 LIB libspdk_accel_error.a 00:03:14.229 SO libspdk_keyring_file.so.2.0 00:03:14.229 SO libspdk_blob_bdev.so.11.0 00:03:14.229 LIB libspdk_accel_ioat.a 00:03:14.229 SO libspdk_accel_error.so.2.0 00:03:14.229 LIB libspdk_accel_iaa.a 00:03:14.229 SO libspdk_accel_ioat.so.6.0 00:03:14.229 SYMLINK libspdk_keyring_file.so 00:03:14.229 SO libspdk_accel_iaa.so.3.0 00:03:14.229 SYMLINK libspdk_blob_bdev.so 00:03:14.229 SYMLINK libspdk_accel_error.so 00:03:14.229 LIB libspdk_accel_dsa.a 00:03:14.229 SYMLINK libspdk_accel_ioat.so 00:03:14.229 SYMLINK libspdk_accel_iaa.so 00:03:14.229 SO libspdk_accel_dsa.so.5.0 00:03:14.229 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:14.488 SYMLINK libspdk_accel_dsa.so 00:03:14.488 CC module/keyring/linux/keyring.o 00:03:14.488 CC module/scheduler/gscheduler/gscheduler.o 00:03:14.488 LIB libspdk_scheduler_dpdk_governor.a 00:03:14.488 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:14.488 CC module/bdev/gpt/gpt.o 00:03:14.488 CC module/bdev/delay/vbdev_delay.o 00:03:14.488 CC 
module/bdev/error/vbdev_error.o 00:03:14.488 CC module/blobfs/bdev/blobfs_bdev.o 00:03:14.488 CC module/keyring/linux/keyring_rpc.o 00:03:14.488 CC module/bdev/lvol/vbdev_lvol.o 00:03:14.488 LIB libspdk_scheduler_gscheduler.a 00:03:14.488 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:14.747 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.747 SO libspdk_scheduler_gscheduler.so.4.0 00:03:14.747 LIB libspdk_fsdev_aio.a 00:03:14.747 LIB libspdk_sock_posix.a 00:03:14.747 SYMLINK libspdk_scheduler_gscheduler.so 00:03:14.747 SO libspdk_fsdev_aio.so.1.0 00:03:14.747 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.747 SO libspdk_sock_posix.so.6.0 00:03:14.747 LIB libspdk_keyring_linux.a 00:03:14.747 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.747 SO libspdk_keyring_linux.so.1.0 00:03:14.747 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.747 LIB libspdk_blobfs_bdev.a 00:03:14.747 SYMLINK libspdk_fsdev_aio.so 00:03:14.747 SO libspdk_blobfs_bdev.so.6.0 00:03:14.747 SYMLINK libspdk_sock_posix.so 00:03:14.747 SYMLINK libspdk_keyring_linux.so 00:03:14.747 CC module/bdev/error/vbdev_error_rpc.o 00:03:15.006 SYMLINK libspdk_blobfs_bdev.so 00:03:15.006 LIB libspdk_bdev_delay.a 00:03:15.006 LIB libspdk_bdev_error.a 00:03:15.006 SO libspdk_bdev_delay.so.6.0 00:03:15.006 CC module/bdev/malloc/bdev_malloc.o 00:03:15.006 CC module/bdev/null/bdev_null.o 00:03:15.006 CC module/bdev/nvme/bdev_nvme.o 00:03:15.006 SO libspdk_bdev_error.so.6.0 00:03:15.006 CC module/bdev/raid/bdev_raid.o 00:03:15.006 LIB libspdk_bdev_gpt.a 00:03:15.006 CC module/bdev/passthru/vbdev_passthru.o 00:03:15.006 SYMLINK libspdk_bdev_delay.so 00:03:15.006 SO libspdk_bdev_gpt.so.6.0 00:03:15.006 SYMLINK libspdk_bdev_error.so 00:03:15.265 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:15.265 LIB libspdk_bdev_lvol.a 00:03:15.265 SYMLINK libspdk_bdev_gpt.so 00:03:15.265 SO libspdk_bdev_lvol.so.6.0 00:03:15.265 CC module/bdev/split/vbdev_split.o 00:03:15.265 SYMLINK libspdk_bdev_lvol.so 00:03:15.265 CC module/bdev/split/vbdev_split_rpc.o 00:03:15.265 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:15.265 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:15.265 CC module/bdev/null/bdev_null_rpc.o 00:03:15.265 CC module/bdev/xnvme/bdev_xnvme.o 00:03:15.523 LIB libspdk_bdev_passthru.a 00:03:15.523 SO libspdk_bdev_passthru.so.6.0 00:03:15.523 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:15.523 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:15.523 LIB libspdk_bdev_null.a 00:03:15.523 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:15.523 LIB libspdk_bdev_split.a 00:03:15.523 SYMLINK libspdk_bdev_passthru.so 00:03:15.523 SO libspdk_bdev_split.so.6.0 00:03:15.523 SO libspdk_bdev_null.so.6.0 00:03:15.523 SYMLINK libspdk_bdev_null.so 00:03:15.523 SYMLINK libspdk_bdev_split.so 00:03:15.523 CC module/bdev/nvme/nvme_rpc.o 00:03:15.523 LIB libspdk_bdev_malloc.a 00:03:15.781 LIB libspdk_bdev_zone_block.a 00:03:15.781 SO libspdk_bdev_malloc.so.6.0 00:03:15.781 LIB libspdk_bdev_xnvme.a 00:03:15.781 CC module/bdev/aio/bdev_aio.o 00:03:15.781 SO libspdk_bdev_zone_block.so.6.0 00:03:15.781 SO libspdk_bdev_xnvme.so.3.0 00:03:15.781 SYMLINK libspdk_bdev_malloc.so 00:03:15.781 CC module/bdev/aio/bdev_aio_rpc.o 00:03:15.781 SYMLINK libspdk_bdev_zone_block.so 00:03:15.781 CC module/bdev/raid/bdev_raid_rpc.o 00:03:15.781 CC module/bdev/ftl/bdev_ftl.o 00:03:15.781 SYMLINK libspdk_bdev_xnvme.so 00:03:15.781 CC module/bdev/raid/bdev_raid_sb.o 00:03:15.782 CC module/bdev/iscsi/bdev_iscsi.o 00:03:15.782 CC module/bdev/nvme/bdev_mdns_client.o 00:03:16.039 CC 
module/bdev/nvme/vbdev_opal.o 00:03:16.039 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:16.039 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:16.039 LIB libspdk_bdev_aio.a 00:03:16.039 SO libspdk_bdev_aio.so.6.0 00:03:16.039 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:16.039 SYMLINK libspdk_bdev_aio.so 00:03:16.039 CC module/bdev/raid/raid0.o 00:03:16.039 CC module/bdev/raid/raid1.o 00:03:16.299 CC module/bdev/raid/concat.o 00:03:16.299 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:16.299 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:16.299 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:16.299 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:16.299 LIB libspdk_bdev_ftl.a 00:03:16.299 SO libspdk_bdev_ftl.so.6.0 00:03:16.299 LIB libspdk_bdev_iscsi.a 00:03:16.299 SYMLINK libspdk_bdev_ftl.so 00:03:16.299 SO libspdk_bdev_iscsi.so.6.0 00:03:16.558 LIB libspdk_bdev_raid.a 00:03:16.558 SYMLINK libspdk_bdev_iscsi.so 00:03:16.558 SO libspdk_bdev_raid.so.6.0 00:03:16.558 SYMLINK libspdk_bdev_raid.so 00:03:16.816 LIB libspdk_bdev_virtio.a 00:03:16.816 SO libspdk_bdev_virtio.so.6.0 00:03:16.816 SYMLINK libspdk_bdev_virtio.so 00:03:17.754 LIB libspdk_bdev_nvme.a 00:03:18.013 SO libspdk_bdev_nvme.so.7.0 00:03:18.013 SYMLINK libspdk_bdev_nvme.so 00:03:18.583 CC module/event/subsystems/fsdev/fsdev.o 00:03:18.583 CC module/event/subsystems/scheduler/scheduler.o 00:03:18.583 CC module/event/subsystems/iobuf/iobuf.o 00:03:18.583 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:18.583 CC module/event/subsystems/sock/sock.o 00:03:18.583 CC module/event/subsystems/vmd/vmd.o 00:03:18.583 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:18.583 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:18.583 CC module/event/subsystems/keyring/keyring.o 00:03:18.843 LIB libspdk_event_keyring.a 00:03:18.843 LIB libspdk_event_fsdev.a 00:03:18.843 LIB libspdk_event_vhost_blk.a 00:03:18.843 LIB libspdk_event_vmd.a 00:03:18.843 LIB libspdk_event_sock.a 00:03:18.843 LIB libspdk_event_scheduler.a 00:03:18.843 LIB libspdk_event_iobuf.a 00:03:18.843 SO libspdk_event_keyring.so.1.0 00:03:18.843 SO libspdk_event_fsdev.so.1.0 00:03:18.843 SO libspdk_event_vhost_blk.so.3.0 00:03:18.843 SO libspdk_event_vmd.so.6.0 00:03:18.843 SO libspdk_event_sock.so.5.0 00:03:18.843 SO libspdk_event_scheduler.so.4.0 00:03:18.843 SO libspdk_event_iobuf.so.3.0 00:03:18.843 SYMLINK libspdk_event_keyring.so 00:03:18.843 SYMLINK libspdk_event_fsdev.so 00:03:18.843 SYMLINK libspdk_event_vhost_blk.so 00:03:18.843 SYMLINK libspdk_event_sock.so 00:03:18.843 SYMLINK libspdk_event_scheduler.so 00:03:18.843 SYMLINK libspdk_event_vmd.so 00:03:18.843 SYMLINK libspdk_event_iobuf.so 00:03:19.412 CC module/event/subsystems/accel/accel.o 00:03:19.412 LIB libspdk_event_accel.a 00:03:19.412 SO libspdk_event_accel.so.6.0 00:03:19.671 SYMLINK libspdk_event_accel.so 00:03:19.931 CC module/event/subsystems/bdev/bdev.o 00:03:20.191 LIB libspdk_event_bdev.a 00:03:20.191 SO libspdk_event_bdev.so.6.0 00:03:20.191 SYMLINK libspdk_event_bdev.so 00:03:20.450 CC module/event/subsystems/nbd/nbd.o 00:03:20.744 CC module/event/subsystems/scsi/scsi.o 00:03:20.744 CC module/event/subsystems/ublk/ublk.o 00:03:20.744 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:20.744 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:20.744 LIB libspdk_event_nbd.a 00:03:20.744 SO libspdk_event_nbd.so.6.0 00:03:20.744 LIB libspdk_event_scsi.a 00:03:20.744 LIB libspdk_event_ublk.a 00:03:20.744 SO libspdk_event_scsi.so.6.0 00:03:20.744 SYMLINK libspdk_event_nbd.so 00:03:20.744 SO libspdk_event_ublk.so.3.0 
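[Editor's context note] The LIB/SO/SYMLINK triplets in this stretch are SPDK's build emitting each component three ways: a static archive (LIB), a versioned shared object (SO), and an unversioned development symlink (SYMLINK). A rough sketch of what one SO/SYMLINK pair corresponds to; the soname value and object list are assumptions for illustration, not taken from this log:

  # Illustrative only; the exact soname and object list are assumed, not logged.
  cc -shared -Wl,-soname,libspdk_event_ublk.so.3 \
     -o libspdk_event_ublk.so.3.0 event_ublk.o
  ln -sf libspdk_event_ublk.so.3.0 libspdk_event_ublk.so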
00:03:20.744 SYMLINK libspdk_event_scsi.so 00:03:20.744 LIB libspdk_event_nvmf.a 00:03:21.003 SYMLINK libspdk_event_ublk.so 00:03:21.003 SO libspdk_event_nvmf.so.6.0 00:03:21.003 SYMLINK libspdk_event_nvmf.so 00:03:21.263 CC module/event/subsystems/iscsi/iscsi.o 00:03:21.263 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:21.522 LIB libspdk_event_vhost_scsi.a 00:03:21.522 LIB libspdk_event_iscsi.a 00:03:21.522 SO libspdk_event_iscsi.so.6.0 00:03:21.522 SO libspdk_event_vhost_scsi.so.3.0 00:03:21.522 SYMLINK libspdk_event_iscsi.so 00:03:21.522 SYMLINK libspdk_event_vhost_scsi.so 00:03:21.781 SO libspdk.so.6.0 00:03:21.781 SYMLINK libspdk.so 00:03:22.039 CC app/spdk_nvme_identify/identify.o 00:03:22.039 CXX app/trace/trace.o 00:03:22.039 CC app/spdk_lspci/spdk_lspci.o 00:03:22.039 CC app/spdk_nvme_perf/perf.o 00:03:22.039 CC app/trace_record/trace_record.o 00:03:22.039 CC app/nvmf_tgt/nvmf_main.o 00:03:22.039 CC app/iscsi_tgt/iscsi_tgt.o 00:03:22.039 CC app/spdk_tgt/spdk_tgt.o 00:03:22.039 CC examples/util/zipf/zipf.o 00:03:22.039 CC test/thread/poller_perf/poller_perf.o 00:03:22.039 LINK spdk_lspci 00:03:22.297 LINK nvmf_tgt 00:03:22.297 LINK zipf 00:03:22.297 LINK iscsi_tgt 00:03:22.297 LINK poller_perf 00:03:22.297 LINK spdk_trace_record 00:03:22.297 LINK spdk_tgt 00:03:22.297 LINK spdk_trace 00:03:22.297 CC app/spdk_nvme_discover/discovery_aer.o 00:03:22.557 TEST_HEADER include/spdk/accel.h 00:03:22.557 TEST_HEADER include/spdk/accel_module.h 00:03:22.557 TEST_HEADER include/spdk/assert.h 00:03:22.557 TEST_HEADER include/spdk/barrier.h 00:03:22.557 TEST_HEADER include/spdk/base64.h 00:03:22.557 TEST_HEADER include/spdk/bdev.h 00:03:22.558 TEST_HEADER include/spdk/bdev_module.h 00:03:22.558 TEST_HEADER include/spdk/bdev_zone.h 00:03:22.558 TEST_HEADER include/spdk/bit_array.h 00:03:22.558 TEST_HEADER include/spdk/bit_pool.h 00:03:22.558 TEST_HEADER include/spdk/blob_bdev.h 00:03:22.558 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:22.558 CC app/spdk_top/spdk_top.o 00:03:22.558 TEST_HEADER include/spdk/blobfs.h 00:03:22.558 TEST_HEADER include/spdk/blob.h 00:03:22.558 TEST_HEADER include/spdk/conf.h 00:03:22.558 TEST_HEADER include/spdk/config.h 00:03:22.558 TEST_HEADER include/spdk/cpuset.h 00:03:22.558 TEST_HEADER include/spdk/crc16.h 00:03:22.558 TEST_HEADER include/spdk/crc32.h 00:03:22.558 TEST_HEADER include/spdk/crc64.h 00:03:22.558 TEST_HEADER include/spdk/dif.h 00:03:22.558 TEST_HEADER include/spdk/dma.h 00:03:22.558 TEST_HEADER include/spdk/endian.h 00:03:22.558 TEST_HEADER include/spdk/env_dpdk.h 00:03:22.558 CC examples/ioat/perf/perf.o 00:03:22.558 TEST_HEADER include/spdk/env.h 00:03:22.558 TEST_HEADER include/spdk/event.h 00:03:22.558 TEST_HEADER include/spdk/fd_group.h 00:03:22.558 TEST_HEADER include/spdk/fd.h 00:03:22.558 TEST_HEADER include/spdk/file.h 00:03:22.558 TEST_HEADER include/spdk/fsdev.h 00:03:22.558 TEST_HEADER include/spdk/fsdev_module.h 00:03:22.558 TEST_HEADER include/spdk/ftl.h 00:03:22.558 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:22.558 TEST_HEADER include/spdk/gpt_spec.h 00:03:22.558 LINK spdk_nvme_discover 00:03:22.558 TEST_HEADER include/spdk/hexlify.h 00:03:22.558 TEST_HEADER include/spdk/histogram_data.h 00:03:22.558 TEST_HEADER include/spdk/idxd.h 00:03:22.558 TEST_HEADER include/spdk/idxd_spec.h 00:03:22.558 TEST_HEADER include/spdk/init.h 00:03:22.558 TEST_HEADER include/spdk/ioat.h 00:03:22.558 TEST_HEADER include/spdk/ioat_spec.h 00:03:22.558 CC test/app/bdev_svc/bdev_svc.o 00:03:22.558 TEST_HEADER include/spdk/iscsi_spec.h 
00:03:22.558 TEST_HEADER include/spdk/json.h 00:03:22.558 TEST_HEADER include/spdk/jsonrpc.h 00:03:22.558 CC test/dma/test_dma/test_dma.o 00:03:22.558 TEST_HEADER include/spdk/keyring.h 00:03:22.558 TEST_HEADER include/spdk/keyring_module.h 00:03:22.558 CC test/app/histogram_perf/histogram_perf.o 00:03:22.558 TEST_HEADER include/spdk/likely.h 00:03:22.558 TEST_HEADER include/spdk/log.h 00:03:22.558 TEST_HEADER include/spdk/lvol.h 00:03:22.558 TEST_HEADER include/spdk/md5.h 00:03:22.558 TEST_HEADER include/spdk/memory.h 00:03:22.558 TEST_HEADER include/spdk/mmio.h 00:03:22.558 TEST_HEADER include/spdk/nbd.h 00:03:22.558 TEST_HEADER include/spdk/net.h 00:03:22.558 TEST_HEADER include/spdk/notify.h 00:03:22.558 TEST_HEADER include/spdk/nvme.h 00:03:22.558 TEST_HEADER include/spdk/nvme_intel.h 00:03:22.558 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:22.558 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:22.558 TEST_HEADER include/spdk/nvme_spec.h 00:03:22.558 TEST_HEADER include/spdk/nvme_zns.h 00:03:22.558 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:22.558 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:22.558 TEST_HEADER include/spdk/nvmf.h 00:03:22.816 TEST_HEADER include/spdk/nvmf_spec.h 00:03:22.817 TEST_HEADER include/spdk/nvmf_transport.h 00:03:22.817 TEST_HEADER include/spdk/opal.h 00:03:22.817 TEST_HEADER include/spdk/opal_spec.h 00:03:22.817 TEST_HEADER include/spdk/pci_ids.h 00:03:22.817 TEST_HEADER include/spdk/pipe.h 00:03:22.817 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:22.817 TEST_HEADER include/spdk/queue.h 00:03:22.817 TEST_HEADER include/spdk/reduce.h 00:03:22.817 TEST_HEADER include/spdk/rpc.h 00:03:22.817 TEST_HEADER include/spdk/scheduler.h 00:03:22.817 TEST_HEADER include/spdk/scsi.h 00:03:22.817 TEST_HEADER include/spdk/scsi_spec.h 00:03:22.817 TEST_HEADER include/spdk/sock.h 00:03:22.817 TEST_HEADER include/spdk/stdinc.h 00:03:22.817 TEST_HEADER include/spdk/string.h 00:03:22.817 TEST_HEADER include/spdk/thread.h 00:03:22.817 TEST_HEADER include/spdk/trace.h 00:03:22.817 TEST_HEADER include/spdk/trace_parser.h 00:03:22.817 TEST_HEADER include/spdk/tree.h 00:03:22.817 TEST_HEADER include/spdk/ublk.h 00:03:22.817 TEST_HEADER include/spdk/util.h 00:03:22.817 TEST_HEADER include/spdk/uuid.h 00:03:22.817 TEST_HEADER include/spdk/version.h 00:03:22.817 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:22.817 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:22.817 TEST_HEADER include/spdk/vhost.h 00:03:22.817 TEST_HEADER include/spdk/vmd.h 00:03:22.817 TEST_HEADER include/spdk/xor.h 00:03:22.817 TEST_HEADER include/spdk/zipf.h 00:03:22.817 CXX test/cpp_headers/accel.o 00:03:22.817 LINK histogram_perf 00:03:22.817 LINK bdev_svc 00:03:22.817 LINK ioat_perf 00:03:22.817 CC test/app/jsoncat/jsoncat.o 00:03:22.817 CXX test/cpp_headers/accel_module.o 00:03:22.817 LINK spdk_nvme_perf 00:03:22.817 LINK spdk_nvme_identify 00:03:22.817 CXX test/cpp_headers/assert.o 00:03:23.075 LINK jsoncat 00:03:23.075 CC examples/ioat/verify/verify.o 00:03:23.075 CXX test/cpp_headers/barrier.o 00:03:23.075 CC test/app/stub/stub.o 00:03:23.075 LINK nvme_fuzz 00:03:23.075 LINK test_dma 00:03:23.075 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.334 CC examples/vmd/lsvmd/lsvmd.o 00:03:23.334 CXX test/cpp_headers/base64.o 00:03:23.334 CC examples/idxd/perf/perf.o 00:03:23.334 LINK stub 00:03:23.334 LINK verify 00:03:23.334 CC examples/thread/thread/thread_ex.o 00:03:23.334 LINK lsvmd 00:03:23.334 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:23.334 LINK interrupt_tgt 00:03:23.334 CXX 
test/cpp_headers/bdev.o 00:03:23.334 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:23.334 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:23.593 LINK spdk_top 00:03:23.593 CXX test/cpp_headers/bdev_module.o 00:03:23.593 CC app/spdk_dd/spdk_dd.o 00:03:23.593 LINK thread 00:03:23.593 LINK idxd_perf 00:03:23.593 CC examples/vmd/led/led.o 00:03:23.593 CC examples/sock/hello_world/hello_sock.o 00:03:23.852 CC app/fio/nvme/fio_plugin.o 00:03:23.852 LINK led 00:03:23.852 CXX test/cpp_headers/bdev_zone.o 00:03:23.852 CC app/fio/bdev/fio_plugin.o 00:03:23.852 CXX test/cpp_headers/bit_array.o 00:03:23.852 LINK vhost_fuzz 00:03:23.852 LINK spdk_dd 00:03:23.852 LINK hello_sock 00:03:23.852 CXX test/cpp_headers/bit_pool.o 00:03:24.111 CC test/env/vtophys/vtophys.o 00:03:24.111 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.111 CXX test/cpp_headers/blob_bdev.o 00:03:24.111 LINK vtophys 00:03:24.111 CC test/event/event_perf/event_perf.o 00:03:24.111 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.111 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.370 CC test/env/memory/memory_ut.o 00:03:24.370 CXX test/cpp_headers/blobfs.o 00:03:24.370 LINK spdk_bdev 00:03:24.370 LINK event_perf 00:03:24.370 LINK spdk_nvme 00:03:24.370 CC examples/accel/perf/accel_perf.o 00:03:24.370 LINK env_dpdk_post_init 00:03:24.370 CXX test/cpp_headers/blob.o 00:03:24.370 LINK mem_callbacks 00:03:24.630 CC test/rpc_client/rpc_client_test.o 00:03:24.631 CC app/vhost/vhost.o 00:03:24.631 CC test/event/reactor/reactor.o 00:03:24.631 CC test/nvme/aer/aer.o 00:03:24.631 CXX test/cpp_headers/conf.o 00:03:24.631 CC test/env/pci/pci_ut.o 00:03:24.631 LINK reactor 00:03:24.631 LINK vhost 00:03:24.631 LINK rpc_client_test 00:03:24.891 CXX test/cpp_headers/config.o 00:03:24.891 CXX test/cpp_headers/cpuset.o 00:03:24.891 CC test/accel/dif/dif.o 00:03:24.891 LINK accel_perf 00:03:24.891 LINK aer 00:03:24.891 CC test/event/reactor_perf/reactor_perf.o 00:03:24.891 CC test/event/app_repeat/app_repeat.o 00:03:24.891 CXX test/cpp_headers/crc16.o 00:03:24.891 CC test/event/scheduler/scheduler.o 00:03:25.151 LINK pci_ut 00:03:25.151 LINK reactor_perf 00:03:25.151 LINK app_repeat 00:03:25.151 CXX test/cpp_headers/crc32.o 00:03:25.151 CC test/nvme/reset/reset.o 00:03:25.151 LINK iscsi_fuzz 00:03:25.151 LINK scheduler 00:03:25.151 CC examples/blob/hello_world/hello_blob.o 00:03:25.151 CXX test/cpp_headers/crc64.o 00:03:25.411 CC examples/blob/cli/blobcli.o 00:03:25.411 CC test/nvme/sgl/sgl.o 00:03:25.411 CXX test/cpp_headers/dif.o 00:03:25.411 LINK memory_ut 00:03:25.411 LINK reset 00:03:25.411 CC examples/nvme/hello_world/hello_world.o 00:03:25.411 CC examples/nvme/reconnect/reconnect.o 00:03:25.411 LINK hello_blob 00:03:25.411 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.411 LINK dif 00:03:25.671 LINK sgl 00:03:25.671 CXX test/cpp_headers/dma.o 00:03:25.671 CC examples/nvme/arbitration/arbitration.o 00:03:25.671 LINK hello_world 00:03:25.671 CC examples/nvme/hotplug/hotplug.o 00:03:25.671 CXX test/cpp_headers/endian.o 00:03:25.930 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.930 LINK blobcli 00:03:25.930 LINK reconnect 00:03:25.930 CXX test/cpp_headers/env_dpdk.o 00:03:25.930 CC test/nvme/e2edp/nvme_dp.o 00:03:25.930 LINK hotplug 00:03:25.930 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:25.930 CXX test/cpp_headers/env.o 00:03:25.930 LINK cmb_copy 00:03:25.930 LINK arbitration 00:03:25.930 CXX test/cpp_headers/event.o 00:03:25.930 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.189 LINK nvme_manage 00:03:26.189 LINK 
nvme_dp 00:03:26.189 CXX test/cpp_headers/fd_group.o 00:03:26.189 CXX test/cpp_headers/fd.o 00:03:26.189 CXX test/cpp_headers/file.o 00:03:26.189 CC test/blobfs/mkfs/mkfs.o 00:03:26.189 CC examples/nvme/abort/abort.o 00:03:26.189 LINK hello_fsdev 00:03:26.189 LINK hello_bdev 00:03:26.448 CXX test/cpp_headers/fsdev.o 00:03:26.448 CC test/lvol/esnap/esnap.o 00:03:26.448 CC test/nvme/overhead/overhead.o 00:03:26.448 CC test/bdev/bdevio/bdevio.o 00:03:26.448 CXX test/cpp_headers/fsdev_module.o 00:03:26.448 LINK mkfs 00:03:26.448 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.448 CXX test/cpp_headers/ftl.o 00:03:26.448 CC test/nvme/err_injection/err_injection.o 00:03:26.448 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.708 LINK pmr_persistence 00:03:26.708 LINK abort 00:03:26.708 CC test/nvme/startup/startup.o 00:03:26.708 LINK overhead 00:03:26.708 CC test/nvme/reserve/reserve.o 00:03:26.708 CXX test/cpp_headers/fuse_dispatcher.o 00:03:26.708 LINK err_injection 00:03:26.708 LINK bdevio 00:03:26.708 CXX test/cpp_headers/gpt_spec.o 00:03:26.708 CXX test/cpp_headers/hexlify.o 00:03:26.708 LINK startup 00:03:26.967 CC test/nvme/simple_copy/simple_copy.o 00:03:26.967 CXX test/cpp_headers/histogram_data.o 00:03:26.967 LINK reserve 00:03:26.967 CXX test/cpp_headers/idxd.o 00:03:26.967 CC test/nvme/connect_stress/connect_stress.o 00:03:26.967 CXX test/cpp_headers/idxd_spec.o 00:03:26.967 CC test/nvme/boot_partition/boot_partition.o 00:03:26.967 CC test/nvme/compliance/nvme_compliance.o 00:03:26.967 CXX test/cpp_headers/init.o 00:03:27.226 LINK simple_copy 00:03:27.226 CXX test/cpp_headers/ioat.o 00:03:27.226 LINK connect_stress 00:03:27.226 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.226 LINK boot_partition 00:03:27.226 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.226 CXX test/cpp_headers/ioat_spec.o 00:03:27.226 CXX test/cpp_headers/iscsi_spec.o 00:03:27.226 CXX test/cpp_headers/json.o 00:03:27.226 LINK fused_ordering 00:03:27.486 LINK doorbell_aers 00:03:27.486 CC test/nvme/fdp/fdp.o 00:03:27.486 CC test/nvme/cuse/cuse.o 00:03:27.486 LINK nvme_compliance 00:03:27.486 CXX test/cpp_headers/jsonrpc.o 00:03:27.486 CXX test/cpp_headers/keyring.o 00:03:27.486 CXX test/cpp_headers/keyring_module.o 00:03:27.486 LINK bdevperf 00:03:27.486 CXX test/cpp_headers/likely.o 00:03:27.486 CXX test/cpp_headers/log.o 00:03:27.486 CXX test/cpp_headers/lvol.o 00:03:27.486 CXX test/cpp_headers/md5.o 00:03:27.745 CXX test/cpp_headers/memory.o 00:03:27.745 CXX test/cpp_headers/mmio.o 00:03:27.745 CXX test/cpp_headers/nbd.o 00:03:27.745 CXX test/cpp_headers/net.o 00:03:27.745 CXX test/cpp_headers/notify.o 00:03:27.745 LINK fdp 00:03:27.745 CXX test/cpp_headers/nvme.o 00:03:27.745 CXX test/cpp_headers/nvme_intel.o 00:03:27.745 CXX test/cpp_headers/nvme_ocssd.o 00:03:27.745 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:27.745 CXX test/cpp_headers/nvme_spec.o 00:03:27.745 CXX test/cpp_headers/nvme_zns.o 00:03:28.005 CXX test/cpp_headers/nvmf_cmd.o 00:03:28.005 CC examples/nvmf/nvmf/nvmf.o 00:03:28.005 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:28.005 CXX test/cpp_headers/nvmf.o 00:03:28.005 CXX test/cpp_headers/nvmf_spec.o 00:03:28.005 CXX test/cpp_headers/nvmf_transport.o 00:03:28.005 CXX test/cpp_headers/opal.o 00:03:28.005 CXX test/cpp_headers/opal_spec.o 00:03:28.005 CXX test/cpp_headers/pci_ids.o 00:03:28.005 CXX test/cpp_headers/pipe.o 00:03:28.005 CXX test/cpp_headers/queue.o 00:03:28.005 CXX test/cpp_headers/reduce.o 00:03:28.270 CXX test/cpp_headers/rpc.o 00:03:28.270 LINK nvmf 00:03:28.270 
CXX test/cpp_headers/scsi.o 00:03:28.270 CXX test/cpp_headers/scheduler.o 00:03:28.270 CXX test/cpp_headers/scsi_spec.o 00:03:28.270 CXX test/cpp_headers/sock.o 00:03:28.270 CXX test/cpp_headers/stdinc.o 00:03:28.270 CXX test/cpp_headers/string.o 00:03:28.270 CXX test/cpp_headers/thread.o 00:03:28.270 CXX test/cpp_headers/trace.o 00:03:28.270 CXX test/cpp_headers/trace_parser.o 00:03:28.528 CXX test/cpp_headers/tree.o 00:03:28.528 CXX test/cpp_headers/ublk.o 00:03:28.528 CXX test/cpp_headers/util.o 00:03:28.528 CXX test/cpp_headers/uuid.o 00:03:28.528 CXX test/cpp_headers/version.o 00:03:28.528 CXX test/cpp_headers/vfio_user_pci.o 00:03:28.528 CXX test/cpp_headers/vfio_user_spec.o 00:03:28.528 CXX test/cpp_headers/vhost.o 00:03:28.528 CXX test/cpp_headers/vmd.o 00:03:28.528 CXX test/cpp_headers/xor.o 00:03:28.528 CXX test/cpp_headers/zipf.o 00:03:28.787 LINK cuse 00:03:32.077 LINK esnap 00:03:32.336 00:03:32.336 real 1m22.817s 00:03:32.336 user 7m4.308s 00:03:32.336 sys 1m49.368s 00:03:32.336 ************************************ 00:03:32.336 END TEST make 00:03:32.336 ************************************ 00:03:32.336 15:08:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:32.336 15:08:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:32.595 15:08:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:32.595 15:08:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:32.595 15:08:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:32.595 15:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.595 15:08:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:32.595 15:08:15 -- pm/common@44 -- $ pid=5280 00:03:32.595 15:08:15 -- pm/common@50 -- $ kill -TERM 5280 00:03:32.595 15:08:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.595 15:08:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:32.595 15:08:15 -- pm/common@44 -- $ pid=5282 00:03:32.595 15:08:15 -- pm/common@50 -- $ kill -TERM 5282 00:03:32.595 15:08:15 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:03:32.595 15:08:15 -- common/autotest_common.sh@1689 -- # lcov --version 00:03:32.595 15:08:15 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:03:32.595 15:08:15 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:03:32.595 15:08:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.595 15:08:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.595 15:08:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.595 15:08:15 -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.595 15:08:15 -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.595 15:08:15 -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.595 15:08:15 -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.595 15:08:15 -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.595 15:08:15 -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.595 15:08:15 -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.595 15:08:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.595 15:08:15 -- scripts/common.sh@344 -- # case "$op" in 00:03:32.595 15:08:15 -- scripts/common.sh@345 -- # : 1 00:03:32.595 15:08:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.595 15:08:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.595 15:08:15 -- scripts/common.sh@365 -- # decimal 1 00:03:32.595 15:08:15 -- scripts/common.sh@353 -- # local d=1 00:03:32.595 15:08:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.595 15:08:15 -- scripts/common.sh@355 -- # echo 1 00:03:32.595 15:08:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.595 15:08:15 -- scripts/common.sh@366 -- # decimal 2 00:03:32.596 15:08:15 -- scripts/common.sh@353 -- # local d=2 00:03:32.596 15:08:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.596 15:08:15 -- scripts/common.sh@355 -- # echo 2 00:03:32.596 15:08:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.596 15:08:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.596 15:08:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.596 15:08:15 -- scripts/common.sh@368 -- # return 0 00:03:32.596 15:08:15 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.596 15:08:15 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:03:32.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.596 --rc genhtml_branch_coverage=1 00:03:32.596 --rc genhtml_function_coverage=1 00:03:32.596 --rc genhtml_legend=1 00:03:32.596 --rc geninfo_all_blocks=1 00:03:32.596 --rc geninfo_unexecuted_blocks=1 00:03:32.596 00:03:32.596 ' 00:03:32.596 15:08:15 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:03:32.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.596 --rc genhtml_branch_coverage=1 00:03:32.596 --rc genhtml_function_coverage=1 00:03:32.596 --rc genhtml_legend=1 00:03:32.596 --rc geninfo_all_blocks=1 00:03:32.596 --rc geninfo_unexecuted_blocks=1 00:03:32.596 00:03:32.596 ' 00:03:32.596 15:08:15 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:03:32.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.596 --rc genhtml_branch_coverage=1 00:03:32.596 --rc genhtml_function_coverage=1 00:03:32.596 --rc genhtml_legend=1 00:03:32.596 --rc geninfo_all_blocks=1 00:03:32.596 --rc geninfo_unexecuted_blocks=1 00:03:32.596 00:03:32.596 ' 00:03:32.596 15:08:15 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:03:32.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.596 --rc genhtml_branch_coverage=1 00:03:32.596 --rc genhtml_function_coverage=1 00:03:32.596 --rc genhtml_legend=1 00:03:32.596 --rc geninfo_all_blocks=1 00:03:32.596 --rc geninfo_unexecuted_blocks=1 00:03:32.596 00:03:32.596 ' 00:03:32.596 15:08:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:32.596 15:08:15 -- nvmf/common.sh@7 -- # uname -s 00:03:32.596 15:08:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.596 15:08:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.596 15:08:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.596 15:08:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.596 15:08:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.596 15:08:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.596 15:08:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.596 15:08:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.596 15:08:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.596 15:08:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.855 15:08:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04e3e96a-6339-4098-b753-e8ed47e36634 00:03:32.855 
15:08:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=04e3e96a-6339-4098-b753-e8ed47e36634 00:03:32.855 15:08:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.855 15:08:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.855 15:08:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:32.855 15:08:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.855 15:08:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:32.855 15:08:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:32.855 15:08:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.855 15:08:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.855 15:08:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.855 15:08:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.855 15:08:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.855 15:08:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.855 15:08:15 -- paths/export.sh@5 -- # export PATH 00:03:32.855 15:08:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.855 15:08:15 -- nvmf/common.sh@51 -- # : 0 00:03:32.855 15:08:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:32.855 15:08:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:32.855 15:08:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.855 15:08:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.855 15:08:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.855 15:08:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:32.855 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:32.855 15:08:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:32.855 15:08:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:32.855 15:08:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:32.855 15:08:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:32.855 15:08:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:32.855 15:08:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:32.855 15:08:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:32.855 15:08:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.855 15:08:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:32.855 15:08:15 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:32.855 15:08:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:32.855 15:08:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:32.855 15:08:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:32.855 15:08:15 -- spdk/autotest.sh@48 -- # udevadm_pid=54729 00:03:32.855 15:08:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:32.855 15:08:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:32.855 15:08:15 -- pm/common@17 -- # local monitor 00:03:32.855 15:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.855 15:08:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.855 15:08:15 -- pm/common@25 -- # sleep 1 00:03:32.855 15:08:15 -- pm/common@21 -- # date +%s 00:03:32.855 15:08:15 -- pm/common@21 -- # date +%s 00:03:32.855 15:08:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729868895 00:03:32.855 15:08:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729868895 00:03:32.855 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729868895_collect-cpu-load.pm.log 00:03:32.855 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729868895_collect-vmstat.pm.log 00:03:33.792 15:08:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.792 15:08:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:33.792 15:08:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:33.792 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:03:33.792 15:08:16 -- spdk/autotest.sh@59 -- # create_test_list 00:03:33.792 15:08:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:33.792 15:08:16 -- common/autotest_common.sh@10 -- # set +x 00:03:33.792 15:08:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:33.792 15:08:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:33.792 15:08:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:33.792 15:08:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:33.792 15:08:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:33.792 15:08:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:33.792 15:08:16 -- common/autotest_common.sh@1453 -- # uname 00:03:33.792 15:08:16 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:33.792 15:08:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.051 15:08:16 -- common/autotest_common.sh@1473 -- # uname 00:03:34.051 15:08:16 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:34.051 15:08:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.051 15:08:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:34.051 lcov: LCOV version 1.15 00:03:34.051 15:08:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:48.952 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:48.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:07.087 15:08:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:07.087 15:08:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.087 15:08:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.087 15:08:46 -- spdk/autotest.sh@78 -- # rm -f 00:04:07.087 15:08:46 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.087 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:07.087 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:07.087 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:07.087 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:07.087 15:08:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:07.087 15:08:48 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:04:07.087 15:08:48 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:04:07.087 15:08:48 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:07.087 15:08:48 
-- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:04:07.087 15:08:48 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:04:07.087 15:08:48 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:07.087 15:08:48 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:04:07.087 15:08:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:07.087 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.087 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.087 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:07.087 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:07.087 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:07.087 No valid GPT data, bailing 00:04:07.087 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:07.087 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.087 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.087 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:07.087 1+0 records in 00:04:07.087 1+0 records out 00:04:07.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178142 s, 58.9 MB/s 00:04:07.087 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.087 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.087 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:07.087 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:07.087 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:07.088 No valid GPT data, bailing 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.088 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.088 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:07.088 1+0 records in 00:04:07.088 1+0 records out 00:04:07.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00535564 s, 196 MB/s 00:04:07.088 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.088 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.088 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:07.088 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:07.088 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:07.088 No valid GPT data, bailing 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.088 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.088 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:07.088 1+0 
records in 00:04:07.088 1+0 records out 00:04:07.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442708 s, 237 MB/s 00:04:07.088 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.088 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.088 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:07.088 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:07.088 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:07.088 No valid GPT data, bailing 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.088 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.088 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:07.088 1+0 records in 00:04:07.088 1+0 records out 00:04:07.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399912 s, 262 MB/s 00:04:07.088 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.088 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.088 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:07.088 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:07.088 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:07.088 No valid GPT data, bailing 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.088 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.088 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:07.088 1+0 records in 00:04:07.088 1+0 records out 00:04:07.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637417 s, 165 MB/s 00:04:07.088 15:08:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:07.088 15:08:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:07.088 15:08:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:07.088 15:08:48 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:07.088 15:08:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:07.088 No valid GPT data, bailing 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:07.088 15:08:48 -- scripts/common.sh@394 -- # pt= 00:04:07.088 15:08:48 -- scripts/common.sh@395 -- # return 1 00:04:07.088 15:08:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:07.088 1+0 records in 00:04:07.088 1+0 records out 00:04:07.088 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0050942 s, 206 MB/s 00:04:07.088 15:08:48 -- spdk/autotest.sh@105 -- # sync 00:04:07.088 15:08:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:07.088 15:08:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:07.088 15:08:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:09.618 15:08:51 -- spdk/autotest.sh@111 -- # uname -s 00:04:09.618 15:08:51 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:09.618 15:08:51 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:09.618 15:08:51 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:09.877 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.445 
Hugepages 00:04:10.445 node hugesize free / total 00:04:10.445 node0 1048576kB 0 / 0 00:04:10.445 node0 2048kB 0 / 0 00:04:10.445 00:04:10.445 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:10.704 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:10.704 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:10.962 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:10.962 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:10.962 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:10.962 15:08:53 -- spdk/autotest.sh@117 -- # uname -s 00:04:10.962 15:08:53 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:10.962 15:08:53 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:10.962 15:08:53 -- common/autotest_common.sh@1512 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.469 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.469 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.469 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.727 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.727 15:08:55 -- common/autotest_common.sh@1513 -- # sleep 1 00:04:13.664 15:08:56 -- common/autotest_common.sh@1514 -- # bdfs=() 00:04:13.664 15:08:56 -- common/autotest_common.sh@1514 -- # local bdfs 00:04:13.664 15:08:56 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.664 15:08:56 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:04:13.664 15:08:56 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:13.664 15:08:56 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:13.664 15:08:56 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.664 15:08:56 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:13.664 15:08:56 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:13.922 15:08:56 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:13.922 15:08:56 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:13.922 15:08:56 -- common/autotest_common.sh@1518 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.490 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.748 Waiting for block devices as requested 00:04:14.748 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.007 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.007 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:15.007 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:20.313 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:20.313 15:09:02 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # grep 0000:00:10.0/nvme/nvme 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1484 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme1 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:20.313 15:09:02 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme1 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1539 -- # continue 00:04:20.313 15:09:02 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # grep 0000:00:11.0/nvme/nvme 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:20.313 15:09:02 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1539 -- # continue 00:04:20.313 15:09:02 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # grep 0000:00:12.0/nvme/nvme 00:04:20.313 15:09:02 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme2 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:20.313 15:09:02 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:20.313 15:09:02 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:20.313 15:09:02 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:04:20.313 15:09:02 -- common/autotest_common.sh@1539 -- # continue 00:04:20.313 15:09:02 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:04:20.313 15:09:02 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:20.313 15:09:03 -- common/autotest_common.sh@1483 -- # grep 0000:00:13.0/nvme/nvme 00:04:20.313 15:09:03 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1484 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:20.313 15:09:03 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme3 ]] 00:04:20.313 15:09:03 -- common/autotest_common.sh@1527 -- # grep oacs 00:04:20.313 15:09:03 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme3 00:04:20.313 15:09:03 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:04:20.573 15:09:03 -- common/autotest_common.sh@1527 -- # oacs=' 0x12a' 00:04:20.573 15:09:03 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:04:20.573 15:09:03 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:04:20.573 15:09:03 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme3 00:04:20.573 15:09:03 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:04:20.573 15:09:03 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:04:20.573 15:09:03 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:04:20.573 15:09:03 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 
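The four nvme id-ctrl probes traced above all apply the same skip test before any namespace revert is attempted. As a condensed sketch (reconstructed from this xtrace rather than copied from autotest_common.sh; the helper name is illustrative):

    # Read OACS, mask bit 3 (namespace management), then skip controllers
    # whose unallocated NVM capacity is already zero.
    check_ns_revert_needed() {
        local ctrlr=$1 oacs unvmcap
        oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)       # ' 0x12a' above
        (( (oacs & 0x8) != 0 )) || return 1    # no namespace management support
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2) # ' 0' above
        [[ $unvmcap -eq 0 ]] && return 1       # nothing unallocated: skip device
        return 0
    }

With oacs=0x12a the mask yields 8, matching the '[[ 8 -ne 0 ]]' tests in the log, and unvmcap=0 is why every iteration ends in 'continue'.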
00:04:20.573 15:09:03 -- common/autotest_common.sh@1539 -- # continue 00:04:20.573 15:09:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:20.573 15:09:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.573 15:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:20.573 15:09:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:20.573 15:09:03 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.573 15:09:03 -- common/autotest_common.sh@10 -- # set +x 00:04:20.573 15:09:03 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.105 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.105 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.105 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.105 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.105 15:09:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:22.105 15:09:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.105 15:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:22.364 15:09:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.364 15:09:04 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:22.364 15:09:04 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.364 15:09:04 -- common/autotest_common.sh@1559 -- # bdfs=() 00:04:22.364 15:09:04 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:04:22.364 15:09:04 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:04:22.364 15:09:04 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.364 15:09:04 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:04:22.364 15:09:04 -- common/autotest_common.sh@1494 -- # bdfs=() 00:04:22.364 15:09:04 -- common/autotest_common.sh@1494 -- # local bdfs 00:04:22.364 15:09:04 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.364 15:09:04 -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:22.364 15:09:04 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:04:22.364 15:09:04 -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:04:22.364 15:09:05 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:22.364 15:09:05 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:22.364 15:09:05 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.364 15:09:05 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:22.364 15:09:05 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.364 15:09:05 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:22.364 15:09:05 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
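The device-ID scan above (finishing just below) is opal_revert_cleanup deciding whether any controller needs an OPAL revert: it keeps only bdfs whose PCI device ID is 0x0a54, and every QEMU-emulated disk here reports 0x0010. A minimal sketch of that filter (names abridged from the trace, not verbatim source):

    # Collect only controllers whose PCI device ID matches the target.
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0x0010' on every port here
        [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} > 0 )) || return 0   # list stays empty, so the step is a no-op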
00:04:22.364 15:09:05 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:22.364 15:09:05 -- common/autotest_common.sh@1562 -- # device=0x0010 00:04:22.364 15:09:05 -- common/autotest_common.sh@1563 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:22.364 15:09:05 -- common/autotest_common.sh@1568 -- # (( 0 > 0 )) 00:04:22.364 15:09:05 -- common/autotest_common.sh@1568 -- # return 0 00:04:22.364 15:09:05 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:22.364 15:09:05 -- common/autotest_common.sh@1576 -- # return 0 00:04:22.364 15:09:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:22.364 15:09:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:22.364 15:09:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.364 15:09:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:22.364 15:09:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:22.364 15:09:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.364 15:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:22.364 15:09:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:22.364 15:09:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.364 15:09:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.364 15:09:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.364 15:09:05 -- common/autotest_common.sh@10 -- # set +x 00:04:22.364 ************************************ 00:04:22.364 START TEST env 00:04:22.364 ************************************ 00:04:22.364 15:09:05 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:22.623 * Looking for test storage... 00:04:22.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:22.623 15:09:05 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:22.623 15:09:05 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:22.623 15:09:05 env -- common/autotest_common.sh@1689 -- # lcov --version 00:04:22.623 15:09:05 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:22.623 15:09:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.623 15:09:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.623 15:09:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.623 15:09:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.623 15:09:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.623 15:09:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.623 15:09:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.623 15:09:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.623 15:09:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.623 15:09:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.623 15:09:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.623 15:09:05 env -- scripts/common.sh@344 -- # case "$op" in 00:04:22.623 15:09:05 env -- scripts/common.sh@345 -- # : 1 00:04:22.623 15:09:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.623 15:09:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:22.623 15:09:05 env -- scripts/common.sh@365 -- # decimal 1 00:04:22.623 15:09:05 env -- scripts/common.sh@353 -- # local d=1 00:04:22.623 15:09:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.623 15:09:05 env -- scripts/common.sh@355 -- # echo 1 00:04:22.623 15:09:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.623 15:09:05 env -- scripts/common.sh@366 -- # decimal 2 00:04:22.623 15:09:05 env -- scripts/common.sh@353 -- # local d=2 00:04:22.623 15:09:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.624 15:09:05 env -- scripts/common.sh@355 -- # echo 2 00:04:22.624 15:09:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.624 15:09:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.624 15:09:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.624 15:09:05 env -- scripts/common.sh@368 -- # return 0 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.624 --rc genhtml_branch_coverage=1 00:04:22.624 --rc genhtml_function_coverage=1 00:04:22.624 --rc genhtml_legend=1 00:04:22.624 --rc geninfo_all_blocks=1 00:04:22.624 --rc geninfo_unexecuted_blocks=1 00:04:22.624 00:04:22.624 ' 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.624 --rc genhtml_branch_coverage=1 00:04:22.624 --rc genhtml_function_coverage=1 00:04:22.624 --rc genhtml_legend=1 00:04:22.624 --rc geninfo_all_blocks=1 00:04:22.624 --rc geninfo_unexecuted_blocks=1 00:04:22.624 00:04:22.624 ' 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.624 --rc genhtml_branch_coverage=1 00:04:22.624 --rc genhtml_function_coverage=1 00:04:22.624 --rc genhtml_legend=1 00:04:22.624 --rc geninfo_all_blocks=1 00:04:22.624 --rc geninfo_unexecuted_blocks=1 00:04:22.624 00:04:22.624 ' 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.624 --rc genhtml_branch_coverage=1 00:04:22.624 --rc genhtml_function_coverage=1 00:04:22.624 --rc genhtml_legend=1 00:04:22.624 --rc geninfo_all_blocks=1 00:04:22.624 --rc geninfo_unexecuted_blocks=1 00:04:22.624 00:04:22.624 ' 00:04:22.624 15:09:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.624 15:09:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.624 15:09:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.624 ************************************ 00:04:22.624 START TEST env_memory 00:04:22.624 ************************************ 00:04:22.624 15:09:05 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:22.624 00:04:22.624 00:04:22.624 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.624 http://cunit.sourceforge.net/ 00:04:22.624 00:04:22.624 00:04:22.624 Suite: memory 00:04:22.883 Test: alloc and free memory map ...[2024-10-25 15:09:05.386359] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:22.883 passed 00:04:22.883 Test: mem map translation ...[2024-10-25 15:09:05.431183] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:22.883 [2024-10-25 15:09:05.431337] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:22.883 [2024-10-25 15:09:05.431481] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:22.883 [2024-10-25 15:09:05.431546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:22.883 passed 00:04:22.883 Test: mem map registration ...[2024-10-25 15:09:05.499682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:22.883 [2024-10-25 15:09:05.499835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:22.883 passed 00:04:22.883 Test: mem map adjacent registrations ...passed 00:04:22.883 00:04:22.883 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.883 suites 1 1 n/a 0 0 00:04:22.883 tests 4 4 4 0 0 00:04:22.883 asserts 152 152 152 0 n/a 00:04:22.883 00:04:22.883 Elapsed time = 0.243 seconds 00:04:23.143 00:04:23.143 real 0m0.300s 00:04:23.143 user 0m0.261s 00:04:23.143 sys 0m0.028s 00:04:23.143 ************************************ 00:04:23.143 END TEST env_memory 00:04:23.143 ************************************ 00:04:23.143 15:09:05 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.143 15:09:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:23.143 15:09:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.143 15:09:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.143 15:09:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.143 15:09:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:23.143 ************************************ 00:04:23.143 START TEST env_vtophys 00:04:23.143 ************************************ 00:04:23.143 15:09:05 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:23.143 EAL: lib.eal log level changed from notice to debug 00:04:23.143 EAL: Detected lcore 0 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 1 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 2 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 3 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 4 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 5 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 6 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 7 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 8 as core 0 on socket 0 00:04:23.143 EAL: Detected lcore 9 as core 0 on socket 0 00:04:23.143 EAL: Maximum logical cores by configuration: 128 00:04:23.143 EAL: Detected CPU lcores: 10 00:04:23.143 EAL: Detected NUMA nodes: 1 00:04:23.143 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:23.143 EAL: Detected shared linkage of DPDK 00:04:23.143 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:23.143 EAL: Selected IOVA mode 'PA' 00:04:23.143 EAL: Probing VFIO support... 00:04:23.143 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.143 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:23.143 EAL: Ask a virtual area of 0x2e000 bytes 00:04:23.143 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:23.143 EAL: Setting up physically contiguous memory... 00:04:23.143 EAL: Setting maximum number of open files to 524288 00:04:23.143 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:23.143 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:23.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.143 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:23.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.143 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:23.143 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:23.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.143 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:23.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.143 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:23.143 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:23.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.143 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:23.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.143 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:23.143 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:23.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:23.143 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:23.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:23.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:23.143 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:23.143 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:23.143 EAL: Hugepages will be freed exactly as allocated. 00:04:23.143 EAL: No shared files mode enabled, IPC is disabled 00:04:23.143 EAL: No shared files mode enabled, IPC is disabled 00:04:23.403 EAL: TSC frequency is ~2490000 KHz 00:04:23.403 EAL: Main lcore 0 is ready (tid=7fea0d9a3a40;cpuset=[0]) 00:04:23.403 EAL: Trying to obtain current memory policy. 00:04:23.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.403 EAL: Restoring previous memory policy: 0 00:04:23.403 EAL: request: mp_malloc_sync 00:04:23.403 EAL: No shared files mode enabled, IPC is disabled 00:04:23.403 EAL: Heap on socket 0 was expanded by 2MB 00:04:23.403 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:23.403 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:23.403 EAL: Mem event callback 'spdk:(nil)' registered 00:04:23.403 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:23.403 00:04:23.403 00:04:23.403 CUnit - A unit testing framework for C - Version 2.1-3 00:04:23.403 http://cunit.sourceforge.net/ 00:04:23.403 00:04:23.403 00:04:23.403 Suite: components_suite 00:04:23.662 Test: vtophys_malloc_test ...passed 00:04:23.662 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:23.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.662 EAL: Restoring previous memory policy: 4 00:04:23.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.662 EAL: request: mp_malloc_sync 00:04:23.662 EAL: No shared files mode enabled, IPC is disabled 00:04:23.662 EAL: Heap on socket 0 was expanded by 4MB 00:04:23.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.662 EAL: request: mp_malloc_sync 00:04:23.662 EAL: No shared files mode enabled, IPC is disabled 00:04:23.662 EAL: Heap on socket 0 was shrunk by 4MB 00:04:23.662 EAL: Trying to obtain current memory policy. 00:04:23.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.662 EAL: Restoring previous memory policy: 4 00:04:23.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.662 EAL: request: mp_malloc_sync 00:04:23.662 EAL: No shared files mode enabled, IPC is disabled 00:04:23.662 EAL: Heap on socket 0 was expanded by 6MB 00:04:23.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.662 EAL: request: mp_malloc_sync 00:04:23.662 EAL: No shared files mode enabled, IPC is disabled 00:04:23.662 EAL: Heap on socket 0 was shrunk by 6MB 00:04:23.662 EAL: Trying to obtain current memory policy. 00:04:23.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.662 EAL: Restoring previous memory policy: 4 00:04:23.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.662 EAL: request: mp_malloc_sync 00:04:23.662 EAL: No shared files mode enabled, IPC is disabled 00:04:23.662 EAL: Heap on socket 0 was expanded by 10MB 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was shrunk by 10MB 00:04:23.921 EAL: Trying to obtain current memory policy. 00:04:23.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.921 EAL: Restoring previous memory policy: 4 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was expanded by 18MB 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was shrunk by 18MB 00:04:23.921 EAL: Trying to obtain current memory policy. 00:04:23.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.921 EAL: Restoring previous memory policy: 4 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was expanded by 34MB 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was shrunk by 34MB 00:04:23.921 EAL: Trying to obtain current memory policy. 
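The expanded/shrunk pairs above come from vtophys_spdk_malloc_test allocating and freeing pinned buffers at roughly doubling sizes, with EAL growing and releasing the hugepage heap on demand. A minimal sketch for rerunning just this suite outside the harness (binary path as in this run; the HUGEMEM value is an assumption):

    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 scripts/setup.sh    # reserve 2 MB hugepages first (assumed amount)
    sudo test/env/vtophys/vtophys         # emits the same EAL expand/shrink lines seen here
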
00:04:23.921 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.921 EAL: Restoring previous memory policy: 4 00:04:23.921 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.921 EAL: request: mp_malloc_sync 00:04:23.921 EAL: No shared files mode enabled, IPC is disabled 00:04:23.921 EAL: Heap on socket 0 was expanded by 66MB 00:04:24.180 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.180 EAL: request: mp_malloc_sync 00:04:24.180 EAL: No shared files mode enabled, IPC is disabled 00:04:24.180 EAL: Heap on socket 0 was shrunk by 66MB 00:04:24.180 EAL: Trying to obtain current memory policy. 00:04:24.180 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.180 EAL: Restoring previous memory policy: 4 00:04:24.180 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.180 EAL: request: mp_malloc_sync 00:04:24.180 EAL: No shared files mode enabled, IPC is disabled 00:04:24.180 EAL: Heap on socket 0 was expanded by 130MB 00:04:24.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.440 EAL: request: mp_malloc_sync 00:04:24.440 EAL: No shared files mode enabled, IPC is disabled 00:04:24.440 EAL: Heap on socket 0 was shrunk by 130MB 00:04:24.701 EAL: Trying to obtain current memory policy. 00:04:24.701 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:24.701 EAL: Restoring previous memory policy: 4 00:04:24.701 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.701 EAL: request: mp_malloc_sync 00:04:24.701 EAL: No shared files mode enabled, IPC is disabled 00:04:24.701 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.271 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.271 EAL: request: mp_malloc_sync 00:04:25.271 EAL: No shared files mode enabled, IPC is disabled 00:04:25.271 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.840 EAL: Trying to obtain current memory policy. 00:04:25.840 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.840 EAL: Restoring previous memory policy: 4 00:04:25.840 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.840 EAL: request: mp_malloc_sync 00:04:25.840 EAL: No shared files mode enabled, IPC is disabled 00:04:25.840 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.778 EAL: request: mp_malloc_sync 00:04:26.778 EAL: No shared files mode enabled, IPC is disabled 00:04:26.778 EAL: Heap on socket 0 was shrunk by 514MB 00:04:27.716 EAL: Trying to obtain current memory policy. 
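The doubling continues up to 1026MB below. While the suite runs, each expansion shows up as ordinary hugepage consumption; a hedged way to watch it from another shell:

    grep -E 'HugePages_(Total|Free)' /proc/meminfo     # one-shot snapshot
    watch -n 0.5 'grep HugePages_Free /proc/meminfo'   # with 2 MB pages, Free drops ~512 pages per GB expanded
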
00:04:27.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.716 EAL: Restoring previous memory policy: 4 00:04:27.716 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.716 EAL: request: mp_malloc_sync 00:04:27.716 EAL: No shared files mode enabled, IPC is disabled 00:04:27.716 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.622 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.622 EAL: request: mp_malloc_sync 00:04:29.622 EAL: No shared files mode enabled, IPC is disabled 00:04:29.622 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:31.526 passed 00:04:31.526 00:04:31.526 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.526 suites 1 1 n/a 0 0 00:04:31.526 tests 2 2 2 0 0 00:04:31.526 asserts 5838 5838 5838 0 n/a 00:04:31.526 00:04:31.526 Elapsed time = 8.076 seconds 00:04:31.526 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.526 EAL: request: mp_malloc_sync 00:04:31.526 EAL: No shared files mode enabled, IPC is disabled 00:04:31.526 EAL: Heap on socket 0 was shrunk by 2MB 00:04:31.526 EAL: No shared files mode enabled, IPC is disabled 00:04:31.526 EAL: No shared files mode enabled, IPC is disabled 00:04:31.526 EAL: No shared files mode enabled, IPC is disabled 00:04:31.526 00:04:31.526 real 0m8.412s 00:04:31.526 user 0m7.405s 00:04:31.526 sys 0m0.846s 00:04:31.526 ************************************ 00:04:31.526 END TEST env_vtophys 00:04:31.526 ************************************ 00:04:31.526 15:09:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.526 15:09:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:31.526 15:09:14 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.526 15:09:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.526 15:09:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.526 15:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.526 ************************************ 00:04:31.526 START TEST env_pci 00:04:31.526 ************************************ 00:04:31.526 15:09:14 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:31.526 00:04:31.526 00:04:31.526 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.526 http://cunit.sourceforge.net/ 00:04:31.526 00:04:31.526 00:04:31.526 Suite: pci 00:04:31.526 Test: pci_hook ...[2024-10-25 15:09:14.220860] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57592 has claimed it 00:04:31.784 passed 00:04:31.784 00:04:31.784 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.784 suites 1 1 n/a 0 0 00:04:31.784 tests 1 1 1 0 0 00:04:31.784 asserts 25 25 25 0 n/a 00:04:31.784 00:04:31.784 Elapsed time = 0.014 seconds 00:04:31.784 EAL: Cannot find device (10000:00:01.0) 00:04:31.784 EAL: Failed to attach device on primary process 00:04:31.784 ************************************ 00:04:31.784 END TEST env_pci 00:04:31.784 ************************************ 00:04:31.784 00:04:31.784 real 0m0.124s 00:04:31.784 user 0m0.055s 00:04:31.784 sys 0m0.067s 00:04:31.784 15:09:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.784 15:09:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:31.784 15:09:14 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:31.784 15:09:14 env -- env/env.sh@15 -- # uname 00:04:31.784 15:09:14 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:31.784 15:09:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:31.784 15:09:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.784 15:09:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:31.784 15:09:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.784 15:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.784 ************************************ 00:04:31.784 START TEST env_dpdk_post_init 00:04:31.784 ************************************ 00:04:31.784 15:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:31.784 EAL: Detected CPU lcores: 10 00:04:31.784 EAL: Detected NUMA nodes: 1 00:04:31.784 EAL: Detected shared linkage of DPDK 00:04:31.784 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:31.784 EAL: Selected IOVA mode 'PA' 00:04:32.041 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:32.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:32.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:32.041 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:32.041 Starting DPDK initialization... 00:04:32.041 Starting SPDK post initialization... 00:04:32.041 SPDK NVMe probe 00:04:32.041 Attaching to 0000:00:10.0 00:04:32.041 Attaching to 0000:00:11.0 00:04:32.041 Attaching to 0000:00:12.0 00:04:32.041 Attaching to 0000:00:13.0 00:04:32.041 Attached to 0000:00:10.0 00:04:32.041 Attached to 0000:00:11.0 00:04:32.041 Attached to 0000:00:13.0 00:04:32.041 Attached to 0000:00:12.0 00:04:32.041 Cleaning up... 
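The four QEMU NVMe controllers (1b36:0010) at 0000:00:10.0 through 0000:00:13.0 were probed with core mask 0x1 and a pinned base virtual address. A sketch of repeating that probe by hand, assuming the devices are still bound for userspace I/O:

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh status     # confirm 0000:00:10.0-13.0 are claimed for SPDK
    sudo test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
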
00:04:32.041 00:04:32.041 real 0m0.298s 00:04:32.041 user 0m0.099s 00:04:32.041 sys 0m0.104s 00:04:32.041 15:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.041 ************************************ 00:04:32.041 END TEST env_dpdk_post_init 00:04:32.041 ************************************ 00:04:32.041 15:09:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:32.041 15:09:14 env -- env/env.sh@26 -- # uname 00:04:32.041 15:09:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:32.041 15:09:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.041 15:09:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.041 15:09:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.041 15:09:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.041 ************************************ 00:04:32.041 START TEST env_mem_callbacks 00:04:32.041 ************************************ 00:04:32.041 15:09:14 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:32.299 EAL: Detected CPU lcores: 10 00:04:32.299 EAL: Detected NUMA nodes: 1 00:04:32.299 EAL: Detected shared linkage of DPDK 00:04:32.299 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.299 EAL: Selected IOVA mode 'PA' 00:04:32.299 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.299 00:04:32.299 00:04:32.299 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.299 http://cunit.sourceforge.net/ 00:04:32.299 00:04:32.299 00:04:32.299 Suite: memory 00:04:32.299 Test: test ... 00:04:32.299 register 0x200000200000 2097152 00:04:32.299 malloc 3145728 00:04:32.299 register 0x200000400000 4194304 00:04:32.299 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.299 malloc 64 00:04:32.299 buf 0x2000004ffec0 len 64 PASSED 00:04:32.299 malloc 4194304 00:04:32.299 register 0x200000800000 6291456 00:04:32.299 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.299 free 0x2000004fffc0 3145728 00:04:32.299 free 0x2000004ffec0 64 00:04:32.299 unregister 0x200000400000 4194304 PASSED 00:04:32.299 free 0x2000009fffc0 4194304 00:04:32.299 unregister 0x200000800000 6291456 PASSED 00:04:32.299 malloc 8388608 00:04:32.300 register 0x200000400000 10485760 00:04:32.300 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.300 free 0x2000005fffc0 8388608 00:04:32.300 unregister 0x200000400000 10485760 PASSED 00:04:32.300 passed 00:04:32.300 00:04:32.300 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.300 suites 1 1 n/a 0 0 00:04:32.300 tests 1 1 1 0 0 00:04:32.300 asserts 15 15 15 0 n/a 00:04:32.300 00:04:32.300 Elapsed time = 0.077 seconds 00:04:32.576 00:04:32.576 real 0m0.300s 00:04:32.576 user 0m0.110s 00:04:32.576 sys 0m0.084s 00:04:32.576 15:09:15 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.576 15:09:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.576 ************************************ 00:04:32.576 END TEST env_mem_callbacks 00:04:32.576 ************************************ 00:04:32.576 ************************************ 00:04:32.576 END TEST env 00:04:32.576 ************************************ 00:04:32.576 00:04:32.576 real 0m10.044s 00:04:32.576 user 0m8.189s 00:04:32.576 sys 0m1.481s 00:04:32.576 15:09:15 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:32.576 15:09:15 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.576 15:09:15 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.576 15:09:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.576 15:09:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.576 15:09:15 -- common/autotest_common.sh@10 -- # set +x 00:04:32.576 ************************************ 00:04:32.576 START TEST rpc 00:04:32.576 ************************************ 00:04:32.576 15:09:15 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:32.843 * Looking for test storage... 00:04:32.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:32.843 15:09:15 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:32.843 15:09:15 rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:32.843 15:09:15 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:32.843 15:09:15 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.843 15:09:15 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.843 15:09:15 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.843 15:09:15 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.843 15:09:15 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.843 15:09:15 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.843 15:09:15 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.843 15:09:15 rpc -- scripts/common.sh@345 -- # : 1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.843 15:09:15 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.843 15:09:15 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.843 15:09:15 rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.843 15:09:15 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.843 15:09:15 rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.843 15:09:15 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.844 15:09:15 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.844 15:09:15 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.844 15:09:15 rpc -- scripts/common.sh@368 -- # return 0 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:32.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.844 --rc genhtml_branch_coverage=1 00:04:32.844 --rc genhtml_function_coverage=1 00:04:32.844 --rc genhtml_legend=1 00:04:32.844 --rc geninfo_all_blocks=1 00:04:32.844 --rc geninfo_unexecuted_blocks=1 00:04:32.844 00:04:32.844 ' 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:32.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.844 --rc genhtml_branch_coverage=1 00:04:32.844 --rc genhtml_function_coverage=1 00:04:32.844 --rc genhtml_legend=1 00:04:32.844 --rc geninfo_all_blocks=1 00:04:32.844 --rc geninfo_unexecuted_blocks=1 00:04:32.844 00:04:32.844 ' 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:04:32.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.844 --rc genhtml_branch_coverage=1 00:04:32.844 --rc genhtml_function_coverage=1 00:04:32.844 --rc genhtml_legend=1 00:04:32.844 --rc geninfo_all_blocks=1 00:04:32.844 --rc geninfo_unexecuted_blocks=1 00:04:32.844 00:04:32.844 ' 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:32.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.844 --rc genhtml_branch_coverage=1 00:04:32.844 --rc genhtml_function_coverage=1 00:04:32.844 --rc genhtml_legend=1 00:04:32.844 --rc geninfo_all_blocks=1 00:04:32.844 --rc geninfo_unexecuted_blocks=1 00:04:32.844 00:04:32.844 ' 00:04:32.844 15:09:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57727 00:04:32.844 15:09:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.844 15:09:15 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:32.844 15:09:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57727 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@831 -- # '[' -z 57727 ']' 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
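rpc.sh@64 above launches spdk_tgt with the bdev tracepoint group enabled, and waitforlisten polls the UNIX socket until the target answers. A rough hand-rolled equivalent of that wait (a sketch; the real helper also checks that the PID is still alive):

    cd /home/vagrant/spdk_repo/spdk
    build/bin/spdk_tgt -e bdev &                              # same flags as rpc.sh@64
    until scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done
    scripts/rpc.py rpc_get_methods | head                     # /var/tmp/spdk.sock is now serving RPCs
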
00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.844 15:09:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.844 [2024-10-25 15:09:15.558324] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:04:32.844 [2024-10-25 15:09:15.558855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57727 ] 00:04:33.103 [2024-10-25 15:09:15.746098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.363 [2024-10-25 15:09:15.858983] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:33.363 [2024-10-25 15:09:15.859249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57727' to capture a snapshot of events at runtime. 00:04:33.363 [2024-10-25 15:09:15.859407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:33.363 [2024-10-25 15:09:15.859465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:33.363 [2024-10-25 15:09:15.859495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57727 for offline analysis/debug. 00:04:33.363 [2024-10-25 15:09:15.860655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.300 15:09:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:34.300 15:09:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:34.300 15:09:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.300 15:09:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.300 15:09:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.300 15:09:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.300 15:09:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.300 15:09:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.300 15:09:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 ************************************ 00:04:34.300 START TEST rpc_integrity 00:04:34.300 ************************************ 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.300 15:09:16 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.300 { 00:04:34.300 "name": "Malloc0", 00:04:34.300 "aliases": [ 00:04:34.300 "53e795fc-754e-4d9c-a7f3-cf1270f005e3" 00:04:34.300 ], 00:04:34.300 "product_name": "Malloc disk", 00:04:34.300 "block_size": 512, 00:04:34.300 "num_blocks": 16384, 00:04:34.300 "uuid": "53e795fc-754e-4d9c-a7f3-cf1270f005e3", 00:04:34.300 "assigned_rate_limits": { 00:04:34.300 "rw_ios_per_sec": 0, 00:04:34.300 "rw_mbytes_per_sec": 0, 00:04:34.300 "r_mbytes_per_sec": 0, 00:04:34.300 "w_mbytes_per_sec": 0 00:04:34.300 }, 00:04:34.300 "claimed": false, 00:04:34.300 "zoned": false, 00:04:34.300 "supported_io_types": { 00:04:34.300 "read": true, 00:04:34.300 "write": true, 00:04:34.300 "unmap": true, 00:04:34.300 "flush": true, 00:04:34.300 "reset": true, 00:04:34.300 "nvme_admin": false, 00:04:34.300 "nvme_io": false, 00:04:34.300 "nvme_io_md": false, 00:04:34.300 "write_zeroes": true, 00:04:34.300 "zcopy": true, 00:04:34.300 "get_zone_info": false, 00:04:34.300 "zone_management": false, 00:04:34.300 "zone_append": false, 00:04:34.300 "compare": false, 00:04:34.300 "compare_and_write": false, 00:04:34.300 "abort": true, 00:04:34.300 "seek_hole": false, 00:04:34.300 "seek_data": false, 00:04:34.300 "copy": true, 00:04:34.300 "nvme_iov_md": false 00:04:34.300 }, 00:04:34.300 "memory_domains": [ 00:04:34.300 { 00:04:34.300 "dma_device_id": "system", 00:04:34.300 "dma_device_type": 1 00:04:34.300 }, 00:04:34.300 { 00:04:34.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.300 "dma_device_type": 2 00:04:34.300 } 00:04:34.300 ], 00:04:34.300 "driver_specific": {} 00:04:34.300 } 00:04:34.300 ]' 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.300 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.300 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 [2024-10-25 15:09:16.888403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.300 [2024-10-25 15:09:16.888479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.300 [2024-10-25 15:09:16.888519] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:34.300 [2024-10-25 15:09:16.888540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.300 [2024-10-25 15:09:16.891220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.300 [2024-10-25 15:09:16.891268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.300 Passthru0 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.301 
15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.301 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.301 { 00:04:34.301 "name": "Malloc0", 00:04:34.301 "aliases": [ 00:04:34.301 "53e795fc-754e-4d9c-a7f3-cf1270f005e3" 00:04:34.301 ], 00:04:34.301 "product_name": "Malloc disk", 00:04:34.301 "block_size": 512, 00:04:34.301 "num_blocks": 16384, 00:04:34.301 "uuid": "53e795fc-754e-4d9c-a7f3-cf1270f005e3", 00:04:34.301 "assigned_rate_limits": { 00:04:34.301 "rw_ios_per_sec": 0, 00:04:34.301 "rw_mbytes_per_sec": 0, 00:04:34.301 "r_mbytes_per_sec": 0, 00:04:34.301 "w_mbytes_per_sec": 0 00:04:34.301 }, 00:04:34.301 "claimed": true, 00:04:34.301 "claim_type": "exclusive_write", 00:04:34.301 "zoned": false, 00:04:34.301 "supported_io_types": { 00:04:34.301 "read": true, 00:04:34.301 "write": true, 00:04:34.301 "unmap": true, 00:04:34.301 "flush": true, 00:04:34.301 "reset": true, 00:04:34.301 "nvme_admin": false, 00:04:34.301 "nvme_io": false, 00:04:34.301 "nvme_io_md": false, 00:04:34.301 "write_zeroes": true, 00:04:34.301 "zcopy": true, 00:04:34.301 "get_zone_info": false, 00:04:34.301 "zone_management": false, 00:04:34.301 "zone_append": false, 00:04:34.301 "compare": false, 00:04:34.301 "compare_and_write": false, 00:04:34.301 "abort": true, 00:04:34.301 "seek_hole": false, 00:04:34.301 "seek_data": false, 00:04:34.301 "copy": true, 00:04:34.301 "nvme_iov_md": false 00:04:34.301 }, 00:04:34.301 "memory_domains": [ 00:04:34.301 { 00:04:34.301 "dma_device_id": "system", 00:04:34.301 "dma_device_type": 1 00:04:34.301 }, 00:04:34.301 { 00:04:34.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.301 "dma_device_type": 2 00:04:34.301 } 00:04:34.301 ], 00:04:34.301 "driver_specific": {} 00:04:34.301 }, 00:04:34.301 { 00:04:34.301 "name": "Passthru0", 00:04:34.301 "aliases": [ 00:04:34.301 "c4ecbfbd-f378-5ca8-b845-9959593e89ad" 00:04:34.301 ], 00:04:34.301 "product_name": "passthru", 00:04:34.301 "block_size": 512, 00:04:34.301 "num_blocks": 16384, 00:04:34.301 "uuid": "c4ecbfbd-f378-5ca8-b845-9959593e89ad", 00:04:34.301 "assigned_rate_limits": { 00:04:34.301 "rw_ios_per_sec": 0, 00:04:34.301 "rw_mbytes_per_sec": 0, 00:04:34.301 "r_mbytes_per_sec": 0, 00:04:34.301 "w_mbytes_per_sec": 0 00:04:34.301 }, 00:04:34.301 "claimed": false, 00:04:34.301 "zoned": false, 00:04:34.301 "supported_io_types": { 00:04:34.301 "read": true, 00:04:34.301 "write": true, 00:04:34.301 "unmap": true, 00:04:34.301 "flush": true, 00:04:34.301 "reset": true, 00:04:34.301 "nvme_admin": false, 00:04:34.301 "nvme_io": false, 00:04:34.301 "nvme_io_md": false, 00:04:34.301 "write_zeroes": true, 00:04:34.301 "zcopy": true, 00:04:34.301 "get_zone_info": false, 00:04:34.301 "zone_management": false, 00:04:34.301 "zone_append": false, 00:04:34.301 "compare": false, 00:04:34.301 "compare_and_write": false, 00:04:34.301 "abort": true, 00:04:34.301 "seek_hole": false, 00:04:34.301 "seek_data": false, 00:04:34.301 "copy": true, 00:04:34.301 "nvme_iov_md": false 00:04:34.301 }, 00:04:34.301 "memory_domains": [ 00:04:34.301 { 00:04:34.301 "dma_device_id": "system", 00:04:34.301 "dma_device_type": 1 00:04:34.301 }, 00:04:34.301 { 00:04:34.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.301 "dma_device_type": 2 
00:04:34.301 } 00:04:34.301 ], 00:04:34.301 "driver_specific": { 00:04:34.301 "passthru": { 00:04:34.301 "name": "Passthru0", 00:04:34.301 "base_bdev_name": "Malloc0" 00:04:34.301 } 00:04:34.301 } 00:04:34.301 } 00:04:34.301 ]' 00:04:34.301 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.301 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.301 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.301 15:09:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.301 15:09:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.301 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.301 15:09:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.301 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.301 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.561 15:09:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.561 15:09:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.561 ************************************ 00:04:34.561 END TEST rpc_integrity 00:04:34.561 ************************************ 00:04:34.561 15:09:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.561 00:04:34.561 real 0m0.360s 00:04:34.561 user 0m0.196s 00:04:34.561 sys 0m0.061s 00:04:34.561 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.561 15:09:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.561 15:09:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.561 15:09:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.561 15:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 ************************************ 00:04:34.561 START TEST rpc_plugins 00:04:34.561 ************************************ 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.561 { 00:04:34.561 "name": "Malloc1", 00:04:34.561 "aliases": 
[ 00:04:34.561 "0cdbfd34-53df-420a-9450-2c2140551eca" 00:04:34.561 ], 00:04:34.561 "product_name": "Malloc disk", 00:04:34.561 "block_size": 4096, 00:04:34.561 "num_blocks": 256, 00:04:34.561 "uuid": "0cdbfd34-53df-420a-9450-2c2140551eca", 00:04:34.561 "assigned_rate_limits": { 00:04:34.561 "rw_ios_per_sec": 0, 00:04:34.561 "rw_mbytes_per_sec": 0, 00:04:34.561 "r_mbytes_per_sec": 0, 00:04:34.561 "w_mbytes_per_sec": 0 00:04:34.561 }, 00:04:34.561 "claimed": false, 00:04:34.561 "zoned": false, 00:04:34.561 "supported_io_types": { 00:04:34.561 "read": true, 00:04:34.561 "write": true, 00:04:34.561 "unmap": true, 00:04:34.561 "flush": true, 00:04:34.561 "reset": true, 00:04:34.561 "nvme_admin": false, 00:04:34.561 "nvme_io": false, 00:04:34.561 "nvme_io_md": false, 00:04:34.561 "write_zeroes": true, 00:04:34.561 "zcopy": true, 00:04:34.561 "get_zone_info": false, 00:04:34.561 "zone_management": false, 00:04:34.561 "zone_append": false, 00:04:34.561 "compare": false, 00:04:34.561 "compare_and_write": false, 00:04:34.561 "abort": true, 00:04:34.561 "seek_hole": false, 00:04:34.561 "seek_data": false, 00:04:34.561 "copy": true, 00:04:34.561 "nvme_iov_md": false 00:04:34.561 }, 00:04:34.561 "memory_domains": [ 00:04:34.561 { 00:04:34.561 "dma_device_id": "system", 00:04:34.561 "dma_device_type": 1 00:04:34.561 }, 00:04:34.561 { 00:04:34.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.561 "dma_device_type": 2 00:04:34.561 } 00:04:34.561 ], 00:04:34.561 "driver_specific": {} 00:04:34.561 } 00:04:34.561 ]' 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.561 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.561 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.820 ************************************ 00:04:34.820 END TEST rpc_plugins 00:04:34.820 ************************************ 00:04:34.820 15:09:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.820 00:04:34.820 real 0m0.155s 00:04:34.820 user 0m0.081s 00:04:34.820 sys 0m0.029s 00:04:34.820 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.820 15:09:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 15:09:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.820 15:09:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.820 15:09:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.820 15:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 ************************************ 00:04:34.820 START TEST rpc_trace_cmd_test 00:04:34.820 ************************************ 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:34.820 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.820 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57727", 00:04:34.820 "tpoint_group_mask": "0x8", 00:04:34.820 "iscsi_conn": { 00:04:34.820 "mask": "0x2", 00:04:34.820 "tpoint_mask": "0x0" 00:04:34.820 }, 00:04:34.820 "scsi": { 00:04:34.820 "mask": "0x4", 00:04:34.820 "tpoint_mask": "0x0" 00:04:34.820 }, 00:04:34.820 "bdev": { 00:04:34.820 "mask": "0x8", 00:04:34.820 "tpoint_mask": "0xffffffffffffffff" 00:04:34.820 }, 00:04:34.820 "nvmf_rdma": { 00:04:34.820 "mask": "0x10", 00:04:34.820 "tpoint_mask": "0x0" 00:04:34.820 }, 00:04:34.820 "nvmf_tcp": { 00:04:34.820 "mask": "0x20", 00:04:34.820 "tpoint_mask": "0x0" 00:04:34.820 }, 00:04:34.821 "ftl": { 00:04:34.821 "mask": "0x40", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "blobfs": { 00:04:34.821 "mask": "0x80", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "dsa": { 00:04:34.821 "mask": "0x200", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "thread": { 00:04:34.821 "mask": "0x400", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "nvme_pcie": { 00:04:34.821 "mask": "0x800", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "iaa": { 00:04:34.821 "mask": "0x1000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "nvme_tcp": { 00:04:34.821 "mask": "0x2000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "bdev_nvme": { 00:04:34.821 "mask": "0x4000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "sock": { 00:04:34.821 "mask": "0x8000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "blob": { 00:04:34.821 "mask": "0x10000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "bdev_raid": { 00:04:34.821 "mask": "0x20000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 }, 00:04:34.821 "scheduler": { 00:04:34.821 "mask": "0x40000", 00:04:34.821 "tpoint_mask": "0x0" 00:04:34.821 } 00:04:34.821 }' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.821 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.080 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.080 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.080 ************************************ 00:04:35.080 END TEST rpc_trace_cmd_test 00:04:35.080 ************************************ 00:04:35.080 15:09:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.080 00:04:35.080 real 0m0.239s 
00:04:35.080 user 0m0.177s 00:04:35.080 sys 0m0.055s 00:04:35.080 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.080 15:09:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 15:09:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.080 15:09:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.080 15:09:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.080 15:09:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.080 15:09:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.080 15:09:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 ************************************ 00:04:35.080 START TEST rpc_daemon_integrity 00:04:35.080 ************************************ 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.080 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.080 { 00:04:35.080 "name": "Malloc2", 00:04:35.080 "aliases": [ 00:04:35.080 "27681be5-299c-4f90-89ff-54ad04312b59" 00:04:35.080 ], 00:04:35.080 "product_name": "Malloc disk", 00:04:35.080 "block_size": 512, 00:04:35.080 "num_blocks": 16384, 00:04:35.080 "uuid": "27681be5-299c-4f90-89ff-54ad04312b59", 00:04:35.080 "assigned_rate_limits": { 00:04:35.080 "rw_ios_per_sec": 0, 00:04:35.080 "rw_mbytes_per_sec": 0, 00:04:35.080 "r_mbytes_per_sec": 0, 00:04:35.080 "w_mbytes_per_sec": 0 00:04:35.080 }, 00:04:35.080 "claimed": false, 00:04:35.080 "zoned": false, 00:04:35.080 "supported_io_types": { 00:04:35.080 "read": true, 00:04:35.080 "write": true, 00:04:35.080 "unmap": true, 00:04:35.081 "flush": true, 00:04:35.081 "reset": true, 00:04:35.081 "nvme_admin": false, 00:04:35.081 "nvme_io": false, 00:04:35.081 "nvme_io_md": false, 00:04:35.081 "write_zeroes": true, 00:04:35.081 "zcopy": true, 00:04:35.081 "get_zone_info": false, 00:04:35.081 "zone_management": false, 00:04:35.081 "zone_append": false, 00:04:35.081 "compare": false, 00:04:35.081 
"compare_and_write": false, 00:04:35.081 "abort": true, 00:04:35.081 "seek_hole": false, 00:04:35.081 "seek_data": false, 00:04:35.081 "copy": true, 00:04:35.081 "nvme_iov_md": false 00:04:35.081 }, 00:04:35.081 "memory_domains": [ 00:04:35.081 { 00:04:35.081 "dma_device_id": "system", 00:04:35.081 "dma_device_type": 1 00:04:35.081 }, 00:04:35.081 { 00:04:35.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.081 "dma_device_type": 2 00:04:35.081 } 00:04:35.081 ], 00:04:35.081 "driver_specific": {} 00:04:35.081 } 00:04:35.081 ]' 00:04:35.081 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 [2024-10-25 15:09:17.824751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.340 [2024-10-25 15:09:17.824825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.340 [2024-10-25 15:09:17.824850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:35.340 [2024-10-25 15:09:17.824865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.340 [2024-10-25 15:09:17.827534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.340 [2024-10-25 15:09:17.827580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.340 Passthru0 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.340 { 00:04:35.340 "name": "Malloc2", 00:04:35.340 "aliases": [ 00:04:35.340 "27681be5-299c-4f90-89ff-54ad04312b59" 00:04:35.340 ], 00:04:35.340 "product_name": "Malloc disk", 00:04:35.340 "block_size": 512, 00:04:35.340 "num_blocks": 16384, 00:04:35.340 "uuid": "27681be5-299c-4f90-89ff-54ad04312b59", 00:04:35.340 "assigned_rate_limits": { 00:04:35.340 "rw_ios_per_sec": 0, 00:04:35.340 "rw_mbytes_per_sec": 0, 00:04:35.340 "r_mbytes_per_sec": 0, 00:04:35.340 "w_mbytes_per_sec": 0 00:04:35.340 }, 00:04:35.340 "claimed": true, 00:04:35.340 "claim_type": "exclusive_write", 00:04:35.340 "zoned": false, 00:04:35.340 "supported_io_types": { 00:04:35.340 "read": true, 00:04:35.340 "write": true, 00:04:35.340 "unmap": true, 00:04:35.340 "flush": true, 00:04:35.340 "reset": true, 00:04:35.340 "nvme_admin": false, 00:04:35.340 "nvme_io": false, 00:04:35.340 "nvme_io_md": false, 00:04:35.340 "write_zeroes": true, 00:04:35.340 "zcopy": true, 00:04:35.340 "get_zone_info": false, 00:04:35.340 "zone_management": false, 00:04:35.340 "zone_append": false, 00:04:35.340 "compare": false, 00:04:35.340 "compare_and_write": false, 00:04:35.340 "abort": true, 00:04:35.340 "seek_hole": false, 00:04:35.340 "seek_data": false, 
00:04:35.340 "copy": true, 00:04:35.340 "nvme_iov_md": false 00:04:35.340 }, 00:04:35.340 "memory_domains": [ 00:04:35.340 { 00:04:35.340 "dma_device_id": "system", 00:04:35.340 "dma_device_type": 1 00:04:35.340 }, 00:04:35.340 { 00:04:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.340 "dma_device_type": 2 00:04:35.340 } 00:04:35.340 ], 00:04:35.340 "driver_specific": {} 00:04:35.340 }, 00:04:35.340 { 00:04:35.340 "name": "Passthru0", 00:04:35.340 "aliases": [ 00:04:35.340 "efb4bd8e-a72d-5703-b6e7-6d2a0ad196ef" 00:04:35.340 ], 00:04:35.340 "product_name": "passthru", 00:04:35.340 "block_size": 512, 00:04:35.340 "num_blocks": 16384, 00:04:35.340 "uuid": "efb4bd8e-a72d-5703-b6e7-6d2a0ad196ef", 00:04:35.340 "assigned_rate_limits": { 00:04:35.340 "rw_ios_per_sec": 0, 00:04:35.340 "rw_mbytes_per_sec": 0, 00:04:35.340 "r_mbytes_per_sec": 0, 00:04:35.340 "w_mbytes_per_sec": 0 00:04:35.340 }, 00:04:35.340 "claimed": false, 00:04:35.340 "zoned": false, 00:04:35.340 "supported_io_types": { 00:04:35.340 "read": true, 00:04:35.340 "write": true, 00:04:35.340 "unmap": true, 00:04:35.340 "flush": true, 00:04:35.340 "reset": true, 00:04:35.340 "nvme_admin": false, 00:04:35.340 "nvme_io": false, 00:04:35.340 "nvme_io_md": false, 00:04:35.340 "write_zeroes": true, 00:04:35.340 "zcopy": true, 00:04:35.340 "get_zone_info": false, 00:04:35.340 "zone_management": false, 00:04:35.340 "zone_append": false, 00:04:35.340 "compare": false, 00:04:35.340 "compare_and_write": false, 00:04:35.340 "abort": true, 00:04:35.340 "seek_hole": false, 00:04:35.340 "seek_data": false, 00:04:35.340 "copy": true, 00:04:35.340 "nvme_iov_md": false 00:04:35.340 }, 00:04:35.340 "memory_domains": [ 00:04:35.340 { 00:04:35.340 "dma_device_id": "system", 00:04:35.340 "dma_device_type": 1 00:04:35.340 }, 00:04:35.340 { 00:04:35.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.340 "dma_device_type": 2 00:04:35.340 } 00:04:35.340 ], 00:04:35.340 "driver_specific": { 00:04:35.340 "passthru": { 00:04:35.340 "name": "Passthru0", 00:04:35.340 "base_bdev_name": "Malloc2" 00:04:35.340 } 00:04:35.340 } 00:04:35.340 } 00:04:35.340 ]' 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:35.340 15:09:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.340 15:09:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.340 00:04:35.340 real 0m0.346s 00:04:35.340 user 0m0.177s 00:04:35.340 sys 0m0.069s 00:04:35.340 15:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.340 15:09:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.340 ************************************ 00:04:35.340 END TEST rpc_daemon_integrity 00:04:35.340 ************************************ 00:04:35.599 15:09:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.599 15:09:18 rpc -- rpc/rpc.sh@84 -- # killprocess 57727 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 57727 ']' 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@954 -- # kill -0 57727 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@955 -- # uname 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57727 00:04:35.599 killing process with pid 57727 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57727' 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@969 -- # kill 57727 00:04:35.599 15:09:18 rpc -- common/autotest_common.sh@974 -- # wait 57727 00:04:38.133 00:04:38.133 real 0m5.358s 00:04:38.133 user 0m5.776s 00:04:38.133 sys 0m1.044s 00:04:38.133 15:09:20 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.133 ************************************ 00:04:38.133 END TEST rpc 00:04:38.133 ************************************ 00:04:38.133 15:09:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.133 15:09:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.133 15:09:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.133 15:09:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.133 15:09:20 -- common/autotest_common.sh@10 -- # set +x 00:04:38.133 ************************************ 00:04:38.133 START TEST skip_rpc 00:04:38.133 ************************************ 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:38.133 * Looking for test storage... 
00:04:38.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.133 15:09:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:04:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.133 --rc genhtml_branch_coverage=1 00:04:38.133 --rc genhtml_function_coverage=1 00:04:38.133 --rc genhtml_legend=1 00:04:38.133 --rc geninfo_all_blocks=1 00:04:38.133 --rc geninfo_unexecuted_blocks=1 00:04:38.133 00:04:38.133 ' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:04:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.133 --rc genhtml_branch_coverage=1 00:04:38.133 --rc genhtml_function_coverage=1 00:04:38.133 --rc genhtml_legend=1 00:04:38.133 --rc geninfo_all_blocks=1 00:04:38.133 --rc geninfo_unexecuted_blocks=1 00:04:38.133 00:04:38.133 ' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:04:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.133 --rc genhtml_branch_coverage=1 00:04:38.133 --rc genhtml_function_coverage=1 00:04:38.133 --rc genhtml_legend=1 00:04:38.133 --rc geninfo_all_blocks=1 00:04:38.133 --rc geninfo_unexecuted_blocks=1 00:04:38.133 00:04:38.133 ' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:04:38.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.133 --rc genhtml_branch_coverage=1 00:04:38.133 --rc genhtml_function_coverage=1 00:04:38.133 --rc genhtml_legend=1 00:04:38.133 --rc geninfo_all_blocks=1 00:04:38.133 --rc geninfo_unexecuted_blocks=1 00:04:38.133 00:04:38.133 ' 00:04:38.133 15:09:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:38.133 15:09:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.133 15:09:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.133 15:09:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.391 ************************************ 00:04:38.391 START TEST skip_rpc 00:04:38.391 ************************************ 00:04:38.391 15:09:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:38.391 15:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:38.391 15:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57956 00:04:38.391 15:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.391 15:09:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:38.391 [2024-10-25 15:09:20.988658] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
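The launch traced above is trap-guarded: the target is backgrounded, its PID captured, and a SIGINT/SIGTERM/EXIT trap guarantees cleanup even if the test aborts early. A sketch under those assumptions (binary path and flags copied from the trace; killprocess_sketch is the helper sketched earlier):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'killprocess_sketch $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    sleep 5    # no RPC socket to poll in this mode, so just give it time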
00:04:38.391 [2024-10-25 15:09:20.989023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57956 ] 00:04:38.649 [2024-10-25 15:09:21.177088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.649 [2024-10-25 15:09:21.289984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.915 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57956 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57956 ']' 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57956 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57956 00:04:43.916 killing process with pid 57956 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57956' 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57956 00:04:43.916 15:09:25 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57956 00:04:45.821 00:04:45.821 real 0m7.450s 00:04:45.821 user 0m6.956s 00:04:45.821 sys 0m0.416s 00:04:45.821 ************************************ 00:04:45.821 END TEST skip_rpc 00:04:45.821 ************************************ 00:04:45.821 15:09:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.821 15:09:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:45.822 15:09:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:45.822 15:09:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.822 15:09:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.822 15:09:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.822 ************************************ 00:04:45.822 START TEST skip_rpc_with_json 00:04:45.822 ************************************ 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58060 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58060 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58060 ']' 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:45.822 15:09:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.822 [2024-10-25 15:09:28.493322] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
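The waitforlisten step echoed above amounts to polling until the target's UNIX domain RPC socket answers. A minimal stand-in (socket path and retry budget match the trace; probing via scripts/rpc.py rpc_get_methods is an assumption about how one would check liveness):

    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} max_retries=${2:-100}
        while (( max_retries-- > 0 )); do
            scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1    # target never started listening
    }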
00:04:45.822 [2024-10-25 15:09:28.495591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58060 ] 00:04:46.080 [2024-10-25 15:09:28.678098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.080 [2024-10-25 15:09:28.790944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 [2024-10-25 15:09:29.643006] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.016 request: 00:04:47.016 { 00:04:47.016 "trtype": "tcp", 00:04:47.016 "method": "nvmf_get_transports", 00:04:47.016 "req_id": 1 00:04:47.016 } 00:04:47.016 Got JSON-RPC error response 00:04:47.016 response: 00:04:47.016 { 00:04:47.016 "code": -19, 00:04:47.016 "message": "No such device" 00:04:47.016 } 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 [2024-10-25 15:09:29.659105] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:47.016 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.275 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:47.275 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.275 { 00:04:47.275 "subsystems": [ 00:04:47.275 { 00:04:47.275 "subsystem": "fsdev", 00:04:47.275 "config": [ 00:04:47.275 { 00:04:47.275 "method": "fsdev_set_opts", 00:04:47.275 "params": { 00:04:47.275 "fsdev_io_pool_size": 65535, 00:04:47.275 "fsdev_io_cache_size": 256 00:04:47.275 } 00:04:47.275 } 00:04:47.275 ] 00:04:47.275 }, 00:04:47.275 { 00:04:47.275 "subsystem": "keyring", 00:04:47.275 "config": [] 00:04:47.275 }, 00:04:47.275 { 00:04:47.275 "subsystem": "iobuf", 00:04:47.275 "config": [ 00:04:47.275 { 00:04:47.275 "method": "iobuf_set_options", 00:04:47.275 "params": { 00:04:47.275 "small_pool_count": 8192, 00:04:47.275 "large_pool_count": 1024, 00:04:47.275 "small_bufsize": 8192, 00:04:47.275 "large_bufsize": 135168, 00:04:47.275 "enable_numa": false 00:04:47.275 } 00:04:47.275 } 00:04:47.275 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "sock", 00:04:47.276 "config": [ 00:04:47.276 { 
00:04:47.276 "method": "sock_set_default_impl", 00:04:47.276 "params": { 00:04:47.276 "impl_name": "posix" 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "sock_impl_set_options", 00:04:47.276 "params": { 00:04:47.276 "impl_name": "ssl", 00:04:47.276 "recv_buf_size": 4096, 00:04:47.276 "send_buf_size": 4096, 00:04:47.276 "enable_recv_pipe": true, 00:04:47.276 "enable_quickack": false, 00:04:47.276 "enable_placement_id": 0, 00:04:47.276 "enable_zerocopy_send_server": true, 00:04:47.276 "enable_zerocopy_send_client": false, 00:04:47.276 "zerocopy_threshold": 0, 00:04:47.276 "tls_version": 0, 00:04:47.276 "enable_ktls": false 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "sock_impl_set_options", 00:04:47.276 "params": { 00:04:47.276 "impl_name": "posix", 00:04:47.276 "recv_buf_size": 2097152, 00:04:47.276 "send_buf_size": 2097152, 00:04:47.276 "enable_recv_pipe": true, 00:04:47.276 "enable_quickack": false, 00:04:47.276 "enable_placement_id": 0, 00:04:47.276 "enable_zerocopy_send_server": true, 00:04:47.276 "enable_zerocopy_send_client": false, 00:04:47.276 "zerocopy_threshold": 0, 00:04:47.276 "tls_version": 0, 00:04:47.276 "enable_ktls": false 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "vmd", 00:04:47.276 "config": [] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "accel", 00:04:47.276 "config": [ 00:04:47.276 { 00:04:47.276 "method": "accel_set_options", 00:04:47.276 "params": { 00:04:47.276 "small_cache_size": 128, 00:04:47.276 "large_cache_size": 16, 00:04:47.276 "task_count": 2048, 00:04:47.276 "sequence_count": 2048, 00:04:47.276 "buf_count": 2048 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "bdev", 00:04:47.276 "config": [ 00:04:47.276 { 00:04:47.276 "method": "bdev_set_options", 00:04:47.276 "params": { 00:04:47.276 "bdev_io_pool_size": 65535, 00:04:47.276 "bdev_io_cache_size": 256, 00:04:47.276 "bdev_auto_examine": true, 00:04:47.276 "iobuf_small_cache_size": 128, 00:04:47.276 "iobuf_large_cache_size": 16 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "bdev_raid_set_options", 00:04:47.276 "params": { 00:04:47.276 "process_window_size_kb": 1024, 00:04:47.276 "process_max_bandwidth_mb_sec": 0 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "bdev_iscsi_set_options", 00:04:47.276 "params": { 00:04:47.276 "timeout_sec": 30 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "bdev_nvme_set_options", 00:04:47.276 "params": { 00:04:47.276 "action_on_timeout": "none", 00:04:47.276 "timeout_us": 0, 00:04:47.276 "timeout_admin_us": 0, 00:04:47.276 "keep_alive_timeout_ms": 10000, 00:04:47.276 "arbitration_burst": 0, 00:04:47.276 "low_priority_weight": 0, 00:04:47.276 "medium_priority_weight": 0, 00:04:47.276 "high_priority_weight": 0, 00:04:47.276 "nvme_adminq_poll_period_us": 10000, 00:04:47.276 "nvme_ioq_poll_period_us": 0, 00:04:47.276 "io_queue_requests": 0, 00:04:47.276 "delay_cmd_submit": true, 00:04:47.276 "transport_retry_count": 4, 00:04:47.276 "bdev_retry_count": 3, 00:04:47.276 "transport_ack_timeout": 0, 00:04:47.276 "ctrlr_loss_timeout_sec": 0, 00:04:47.276 "reconnect_delay_sec": 0, 00:04:47.276 "fast_io_fail_timeout_sec": 0, 00:04:47.276 "disable_auto_failback": false, 00:04:47.276 "generate_uuids": false, 00:04:47.276 "transport_tos": 0, 00:04:47.276 "nvme_error_stat": false, 00:04:47.276 "rdma_srq_size": 0, 00:04:47.276 "io_path_stat": false, 
00:04:47.276 "allow_accel_sequence": false, 00:04:47.276 "rdma_max_cq_size": 0, 00:04:47.276 "rdma_cm_event_timeout_ms": 0, 00:04:47.276 "dhchap_digests": [ 00:04:47.276 "sha256", 00:04:47.276 "sha384", 00:04:47.276 "sha512" 00:04:47.276 ], 00:04:47.276 "dhchap_dhgroups": [ 00:04:47.276 "null", 00:04:47.276 "ffdhe2048", 00:04:47.276 "ffdhe3072", 00:04:47.276 "ffdhe4096", 00:04:47.276 "ffdhe6144", 00:04:47.276 "ffdhe8192" 00:04:47.276 ] 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "bdev_nvme_set_hotplug", 00:04:47.276 "params": { 00:04:47.276 "period_us": 100000, 00:04:47.276 "enable": false 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "bdev_wait_for_examine" 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "scsi", 00:04:47.276 "config": null 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "scheduler", 00:04:47.276 "config": [ 00:04:47.276 { 00:04:47.276 "method": "framework_set_scheduler", 00:04:47.276 "params": { 00:04:47.276 "name": "static" 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "vhost_scsi", 00:04:47.276 "config": [] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "vhost_blk", 00:04:47.276 "config": [] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "ublk", 00:04:47.276 "config": [] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "nbd", 00:04:47.276 "config": [] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "nvmf", 00:04:47.276 "config": [ 00:04:47.276 { 00:04:47.276 "method": "nvmf_set_config", 00:04:47.276 "params": { 00:04:47.276 "discovery_filter": "match_any", 00:04:47.276 "admin_cmd_passthru": { 00:04:47.276 "identify_ctrlr": false 00:04:47.276 }, 00:04:47.276 "dhchap_digests": [ 00:04:47.276 "sha256", 00:04:47.276 "sha384", 00:04:47.276 "sha512" 00:04:47.276 ], 00:04:47.276 "dhchap_dhgroups": [ 00:04:47.276 "null", 00:04:47.276 "ffdhe2048", 00:04:47.276 "ffdhe3072", 00:04:47.276 "ffdhe4096", 00:04:47.276 "ffdhe6144", 00:04:47.276 "ffdhe8192" 00:04:47.276 ] 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "nvmf_set_max_subsystems", 00:04:47.276 "params": { 00:04:47.276 "max_subsystems": 1024 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "nvmf_set_crdt", 00:04:47.276 "params": { 00:04:47.276 "crdt1": 0, 00:04:47.276 "crdt2": 0, 00:04:47.276 "crdt3": 0 00:04:47.276 } 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "method": "nvmf_create_transport", 00:04:47.276 "params": { 00:04:47.276 "trtype": "TCP", 00:04:47.276 "max_queue_depth": 128, 00:04:47.276 "max_io_qpairs_per_ctrlr": 127, 00:04:47.276 "in_capsule_data_size": 4096, 00:04:47.276 "max_io_size": 131072, 00:04:47.276 "io_unit_size": 131072, 00:04:47.276 "max_aq_depth": 128, 00:04:47.276 "num_shared_buffers": 511, 00:04:47.276 "buf_cache_size": 4294967295, 00:04:47.276 "dif_insert_or_strip": false, 00:04:47.276 "zcopy": false, 00:04:47.276 "c2h_success": true, 00:04:47.276 "sock_priority": 0, 00:04:47.276 "abort_timeout_sec": 1, 00:04:47.276 "ack_timeout": 0, 00:04:47.276 "data_wr_pool_size": 0 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 }, 00:04:47.276 { 00:04:47.276 "subsystem": "iscsi", 00:04:47.276 "config": [ 00:04:47.276 { 00:04:47.276 "method": "iscsi_set_options", 00:04:47.276 "params": { 00:04:47.276 "node_base": "iqn.2016-06.io.spdk", 00:04:47.276 "max_sessions": 128, 00:04:47.276 "max_connections_per_session": 2, 00:04:47.276 "max_queue_depth": 64, 00:04:47.276 
"default_time2wait": 2, 00:04:47.276 "default_time2retain": 20, 00:04:47.276 "first_burst_length": 8192, 00:04:47.276 "immediate_data": true, 00:04:47.276 "allow_duplicated_isid": false, 00:04:47.276 "error_recovery_level": 0, 00:04:47.276 "nop_timeout": 60, 00:04:47.276 "nop_in_interval": 30, 00:04:47.276 "disable_chap": false, 00:04:47.276 "require_chap": false, 00:04:47.276 "mutual_chap": false, 00:04:47.276 "chap_group": 0, 00:04:47.276 "max_large_datain_per_connection": 64, 00:04:47.276 "max_r2t_per_connection": 4, 00:04:47.276 "pdu_pool_size": 36864, 00:04:47.276 "immediate_data_pool_size": 16384, 00:04:47.276 "data_out_pool_size": 2048 00:04:47.276 } 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 } 00:04:47.276 ] 00:04:47.276 } 00:04:47.276 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.276 15:09:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58060 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58060 ']' 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58060 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58060 00:04:47.277 killing process with pid 58060 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58060' 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58060 00:04:47.277 15:09:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58060 00:04:49.812 15:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58116 00:04:49.812 15:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.812 15:09:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58116 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58116 ']' 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58116 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58116 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:55.078 killing process with pid 58116 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58116' 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- 
# kill 58116 00:04:55.078 15:09:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58116 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:57.613 ************************************ 00:04:57.613 END TEST skip_rpc_with_json 00:04:57.613 ************************************ 00:04:57.613 00:04:57.613 real 0m11.333s 00:04:57.613 user 0m10.763s 00:04:57.613 sys 0m0.921s 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.613 15:09:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.613 15:09:39 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.613 15:09:39 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.613 15:09:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.613 ************************************ 00:04:57.613 START TEST skip_rpc_with_delay 00:04:57.613 ************************************ 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.613 [2024-10-25 15:09:39.908314] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
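That ERROR is the expected outcome: --wait-for-rpc has no meaning once --no-rpc-server suppresses the RPC server, so the target must refuse to start, and the NOT wrapper in the trace passes only when the wrapped command fails. Reduced to a sketch (helper body simplified; the real NOT also classifies the exit status):

    NOT() {
        if "$@"; then return 1; fi    # command unexpectedly succeeded
        return 0                      # command failed, as required
    }
    # The conflicting flag pair must be rejected by the target:
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --no-rpc-server -m 0x1 --wait-for-rpc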
00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:57.613 00:04:57.613 real 0m0.174s 00:04:57.613 user 0m0.083s 00:04:57.613 sys 0m0.090s 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:57.613 15:09:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.613 ************************************ 00:04:57.613 END TEST skip_rpc_with_delay 00:04:57.613 ************************************ 00:04:57.613 15:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.613 15:09:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.613 15:09:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.614 15:09:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:57.614 15:09:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:57.614 15:09:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.614 ************************************ 00:04:57.614 START TEST exit_on_failed_rpc_init 00:04:57.614 ************************************ 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:57.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58250 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58250 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58250 ']' 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.614 15:09:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.614 [2024-10-25 15:09:40.157672] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:04:57.614 [2024-10-25 15:09:40.157805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58250 ] 00:04:57.873 [2024-10-25 15:09:40.341191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.873 [2024-10-25 15:09:40.460685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:58.814 15:09:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.814 [2024-10-25 15:09:41.455058] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:04:58.814 [2024-10-25 15:09:41.455537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58273 ] 00:04:59.074 [2024-10-25 15:09:41.663972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.333 [2024-10-25 15:09:41.808810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.333 [2024-10-25 15:09:41.808923] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:59.333 [2024-10-25 15:09:41.808941] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:59.333 [2024-10-25 15:09:41.808962] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58250 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58250 ']' 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58250 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58250 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.592 killing process with pid 58250 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58250' 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58250 00:04:59.592 15:09:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58250 00:05:02.126 00:05:02.126 real 0m4.445s 00:05:02.126 user 0m4.821s 00:05:02.126 sys 0m0.647s 00:05:02.126 15:09:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.126 15:09:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 ************************************ 00:05:02.126 END TEST exit_on_failed_rpc_init 00:05:02.126 ************************************ 00:05:02.126 15:09:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.126 ************************************ 00:05:02.126 END TEST skip_rpc 00:05:02.126 ************************************ 00:05:02.126 00:05:02.126 real 0m23.941s 00:05:02.126 user 0m22.833s 00:05:02.126 sys 0m2.404s 00:05:02.126 15:09:44 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.126 15:09:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 15:09:44 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.126 15:09:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.126 15:09:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.126 15:09:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.126 
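Condensed, the exit_on_failed_rpc_init run that just finished does this: a first target binds the default /var/tmp/spdk.sock, then a second target pointed at the same socket must fail rpc_listen and exit non-zero. A sketch of that collision (core masks from the trace; the bare sleep stands in for a proper waitforlisten poll):

    build/bin/spdk_tgt -m 0x1 &     # first instance owns /var/tmp/spdk.sock
    first=$!
    sleep 1                         # crude stand-in for waitforlisten
    if build/bin/spdk_tgt -m 0x2; then
        echo 'second target should have failed: socket already in use' >&2
    fi
    kill "$first"; wait "$first" || true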
************************************ 00:05:02.126 START TEST rpc_client 00:05:02.126 ************************************ 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:02.126 * Looking for test storage... 00:05:02.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.126 15:09:44 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.126 --rc genhtml_branch_coverage=1 00:05:02.126 --rc genhtml_function_coverage=1 00:05:02.126 --rc genhtml_legend=1 00:05:02.126 --rc geninfo_all_blocks=1 00:05:02.126 --rc geninfo_unexecuted_blocks=1 00:05:02.126 00:05:02.126 ' 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.126 --rc genhtml_branch_coverage=1 00:05:02.126 --rc genhtml_function_coverage=1 00:05:02.126 --rc genhtml_legend=1 00:05:02.126 --rc geninfo_all_blocks=1 00:05:02.126 --rc geninfo_unexecuted_blocks=1 00:05:02.126 00:05:02.126 ' 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.126 --rc genhtml_branch_coverage=1 00:05:02.126 --rc genhtml_function_coverage=1 00:05:02.126 --rc genhtml_legend=1 00:05:02.126 --rc geninfo_all_blocks=1 00:05:02.126 --rc geninfo_unexecuted_blocks=1 00:05:02.126 00:05:02.126 ' 00:05:02.126 15:09:44 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.126 --rc genhtml_branch_coverage=1 00:05:02.126 --rc genhtml_function_coverage=1 00:05:02.126 --rc genhtml_legend=1 00:05:02.126 --rc geninfo_all_blocks=1 00:05:02.126 --rc geninfo_unexecuted_blocks=1 00:05:02.126 00:05:02.126 ' 00:05:02.126 15:09:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:02.385 OK 00:05:02.385 15:09:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:02.385 00:05:02.385 real 0m0.305s 00:05:02.385 user 0m0.156s 00:05:02.385 sys 0m0.165s 00:05:02.385 15:09:44 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.385 15:09:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:02.385 ************************************ 00:05:02.385 END TEST rpc_client 00:05:02.385 ************************************ 00:05:02.385 15:09:44 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.385 15:09:44 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.385 15:09:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.385 15:09:44 -- common/autotest_common.sh@10 -- # set +x 00:05:02.385 ************************************ 00:05:02.385 START TEST json_config 00:05:02.385 ************************************ 00:05:02.385 15:09:44 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:02.385 15:09:45 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:02.385 15:09:45 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:05:02.385 15:09:45 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.644 15:09:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.644 15:09:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.644 15:09:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.644 15:09:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.644 15:09:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.644 15:09:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:02.644 15:09:45 json_config -- scripts/common.sh@345 -- # : 1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.644 15:09:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.644 15:09:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@353 -- # local d=1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.644 15:09:45 json_config -- scripts/common.sh@355 -- # echo 1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.644 15:09:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@353 -- # local d=2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.644 15:09:45 json_config -- scripts/common.sh@355 -- # echo 2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.644 15:09:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.644 15:09:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.644 15:09:45 json_config -- scripts/common.sh@368 -- # return 0 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:02.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.644 --rc genhtml_branch_coverage=1 00:05:02.644 --rc genhtml_function_coverage=1 00:05:02.644 --rc genhtml_legend=1 00:05:02.644 --rc geninfo_all_blocks=1 00:05:02.644 --rc geninfo_unexecuted_blocks=1 00:05:02.644 00:05:02.644 ' 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:02.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.644 --rc genhtml_branch_coverage=1 00:05:02.644 --rc genhtml_function_coverage=1 00:05:02.644 --rc genhtml_legend=1 00:05:02.644 --rc geninfo_all_blocks=1 00:05:02.644 --rc geninfo_unexecuted_blocks=1 00:05:02.644 00:05:02.644 ' 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:02.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.644 --rc genhtml_branch_coverage=1 00:05:02.644 --rc genhtml_function_coverage=1 00:05:02.644 --rc genhtml_legend=1 00:05:02.644 --rc geninfo_all_blocks=1 00:05:02.644 --rc geninfo_unexecuted_blocks=1 00:05:02.644 00:05:02.644 ' 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:02.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.644 --rc genhtml_branch_coverage=1 00:05:02.644 --rc genhtml_function_coverage=1 00:05:02.644 --rc genhtml_legend=1 00:05:02.644 --rc geninfo_all_blocks=1 00:05:02.644 --rc geninfo_unexecuted_blocks=1 00:05:02.644 00:05:02.644 ' 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.644 15:09:45 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04e3e96a-6339-4098-b753-e8ed47e36634 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=04e3e96a-6339-4098-b753-e8ed47e36634 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.644 15:09:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.644 15:09:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.644 15:09:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.644 15:09:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.644 15:09:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.644 15:09:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.644 15:09:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.644 15:09:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:02.644 15:09:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@51 -- # : 0 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.644 15:09:45 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.644 15:09:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:02.644 WARNING: No tests are enabled so not running JSON configuration tests 00:05:02.644 15:09:45 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:02.644 00:05:02.644 real 0m0.219s 00:05:02.644 user 0m0.122s 00:05:02.644 sys 0m0.101s 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.644 15:09:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.644 ************************************ 00:05:02.644 END TEST json_config 00:05:02.644 ************************************ 00:05:02.644 15:09:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.644 15:09:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.644 15:09:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.644 15:09:45 -- common/autotest_common.sh@10 -- # set +x 00:05:02.644 ************************************ 00:05:02.644 START TEST json_config_extra_key 00:05:02.644 ************************************ 00:05:02.644 15:09:45 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:02.644 15:09:45 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:02.644 15:09:45 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:02.644 15:09:45 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.903 15:09:45 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.903 --rc genhtml_branch_coverage=1 00:05:02.903 --rc genhtml_function_coverage=1 00:05:02.903 --rc genhtml_legend=1 00:05:02.903 --rc geninfo_all_blocks=1 00:05:02.903 --rc geninfo_unexecuted_blocks=1 00:05:02.903 00:05:02.903 ' 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.903 --rc genhtml_branch_coverage=1 00:05:02.903 --rc genhtml_function_coverage=1 00:05:02.903 --rc genhtml_legend=1 00:05:02.903 --rc geninfo_all_blocks=1 00:05:02.903 --rc geninfo_unexecuted_blocks=1 00:05:02.903 00:05:02.903 ' 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.903 --rc genhtml_branch_coverage=1 00:05:02.903 --rc genhtml_function_coverage=1 00:05:02.903 --rc genhtml_legend=1 00:05:02.903 --rc geninfo_all_blocks=1 00:05:02.903 --rc geninfo_unexecuted_blocks=1 00:05:02.903 00:05:02.903 ' 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:02.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.903 --rc genhtml_branch_coverage=1 00:05:02.903 --rc 
genhtml_function_coverage=1 00:05:02.903 --rc genhtml_legend=1 00:05:02.903 --rc geninfo_all_blocks=1 00:05:02.903 --rc geninfo_unexecuted_blocks=1 00:05:02.903 00:05:02.903 ' 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:04e3e96a-6339-4098-b753-e8ed47e36634 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=04e3e96a-6339-4098-b753-e8ed47e36634 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:02.903 15:09:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:02.903 15:09:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.903 15:09:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.903 15:09:45 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.903 15:09:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:02.903 15:09:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:02.903 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:02.903 15:09:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:02.903 INFO: launching applications... 
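Note: the "[: : integer expression expected" error above comes from nvmf/common.sh line 33, where the trace shows '[' '' -eq 1 ']' being evaluated: the variable under test expands to an empty string, and test's -eq requires integers on both sides. It appears harmless here (the test evaluates false and the script continues, as the following records show). A minimal reproduction and the usual guards, as a sketch only; "flag" is a stand-in for whichever variable is empty at that line:

    flag=""
    [ "$flag" -eq 1 ] && echo yes        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo yes   # guard 1: default empty to 0
    [ "$flag" = "1" ] && echo yes        # guard 2: string compare never errors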
00:05:02.903 15:09:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58483 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:02.903 Waiting for target to run... 00:05:02.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:02.903 15:09:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58483 /var/tmp/spdk_tgt.sock 00:05:02.903 15:09:45 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58483 ']' 00:05:02.904 15:09:45 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:02.904 15:09:45 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.904 15:09:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:02.904 15:09:45 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:02.904 15:09:45 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.904 15:09:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.904 [2024-10-25 15:09:45.623195] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:02.904 [2024-10-25 15:09:45.623518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58483 ] 00:05:03.472 [2024-10-25 15:09:46.185119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.730 [2024-10-25 15:09:46.295681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.299 00:05:04.299 INFO: shutting down applications... 00:05:04.299 15:09:47 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.299 15:09:47 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.299 15:09:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
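Note: waitforlisten above blocks until the freshly launched spdk_tgt (pid 58483) answers RPC on /var/tmp/spdk_tgt.sock, retrying up to max_retries=100. A simplified reconstruction of that polling pattern, inferred from the trace rather than copied from autotest_common.sh (probe command and sleep interval are assumptions; rpc.py's -s and -t flags are confirmed elsewhere in this log):

    # waitforlisten PID [RPC_SOCK] -- poll until the target's RPC socket answers
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died during startup
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
                &>/dev/null && return 0                   # socket is up and serving
            sleep 0.1
        done
        return 1                                          # never started listening
    }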
00:05:04.299 15:09:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58483 ]] 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58483 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:04.299 15:09:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.866 15:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.866 15:09:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.866 15:09:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:04.866 15:09:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.433 15:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.433 15:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.433 15:09:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:05.433 15:09:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.050 15:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.050 15:09:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.050 15:09:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:06.050 15:09:48 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.310 15:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.310 15:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.310 15:09:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:06.310 15:09:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.878 15:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.878 15:09:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.878 15:09:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:06.878 15:09:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58483 00:05:07.447 SPDK target shutdown done 00:05:07.447 Success 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.447 15:09:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.447 15:09:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.447 00:05:07.447 real 0m4.765s 00:05:07.447 user 0m4.026s 00:05:07.447 sys 0m0.755s 00:05:07.447 
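Note: the shutdown above is a bounded graceful stop: one SIGINT to pid 58483, then a poll of kill -0 every 0.5 s for at most 30 iterations (15 s) until the process is gone, at which point app_pid is cleared and "SPDK target shutdown done" is printed. The loop the trace walks through, condensed (what json_config/common.sh does if all 30 iterations expire is not shown in this trace):

    pid=58483
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then    # process has exited
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done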
15:09:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.447 15:09:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.447 ************************************ 00:05:07.447 END TEST json_config_extra_key 00:05:07.447 ************************************ 00:05:07.447 15:09:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.447 15:09:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.447 15:09:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.447 15:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:07.447 ************************************ 00:05:07.447 START TEST alias_rpc 00:05:07.447 ************************************ 00:05:07.447 15:09:50 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.706 * Looking for test storage... 00:05:07.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.706 15:09:50 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:07.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.706 --rc genhtml_branch_coverage=1 00:05:07.706 --rc genhtml_function_coverage=1 00:05:07.706 --rc genhtml_legend=1 00:05:07.706 --rc geninfo_all_blocks=1 00:05:07.706 --rc geninfo_unexecuted_blocks=1 00:05:07.706 00:05:07.706 ' 00:05:07.706 15:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.706 15:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58595 00:05:07.706 15:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.706 15:09:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58595 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58595 ']' 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:07.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.706 15:09:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.965 [2024-10-25 15:09:50.485872] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:07.965 [2024-10-25 15:09:50.486270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:05:07.965 [2024-10-25 15:09:50.672912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.226 [2024-10-25 15:09:50.788601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.174 15:09:51 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.174 15:09:51 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.174 15:09:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:09.174 15:09:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58595 00:05:09.174 15:09:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58595 ']' 00:05:09.174 15:09:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58595 00:05:09.175 15:09:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:09.175 15:09:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.175 15:09:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58595 00:05:09.434 15:09:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.434 15:09:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.434 15:09:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58595' 00:05:09.434 killing process with pid 58595 00:05:09.434 15:09:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 58595 00:05:09.434 15:09:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 58595 00:05:11.968 ************************************ 00:05:11.968 END TEST alias_rpc 00:05:11.968 ************************************ 00:05:11.968 00:05:11.968 real 0m4.174s 00:05:11.968 user 0m4.096s 00:05:11.968 sys 0m0.651s 00:05:11.968 15:09:54 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.968 15:09:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.968 15:09:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:11.968 15:09:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:11.968 15:09:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.968 15:09:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.968 15:09:54 -- common/autotest_common.sh@10 -- # set +x 00:05:11.968 ************************************ 00:05:11.968 START TEST spdkcli_tcp 00:05:11.968 ************************************ 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:11.968 * Looking for test storage... 
00:05:11.968 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.968 15:09:54 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:11.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.968 --rc genhtml_branch_coverage=1 00:05:11.968 --rc genhtml_function_coverage=1 00:05:11.968 --rc genhtml_legend=1 00:05:11.968 --rc geninfo_all_blocks=1 00:05:11.968 --rc geninfo_unexecuted_blocks=1 00:05:11.968 00:05:11.968 ' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:11.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.968 --rc genhtml_branch_coverage=1 00:05:11.968 --rc genhtml_function_coverage=1 00:05:11.968 --rc genhtml_legend=1 00:05:11.968 --rc geninfo_all_blocks=1 00:05:11.968 --rc geninfo_unexecuted_blocks=1 00:05:11.968 
00:05:11.968 ' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:11.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.968 --rc genhtml_branch_coverage=1 00:05:11.968 --rc genhtml_function_coverage=1 00:05:11.968 --rc genhtml_legend=1 00:05:11.968 --rc geninfo_all_blocks=1 00:05:11.968 --rc geninfo_unexecuted_blocks=1 00:05:11.968 00:05:11.968 ' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:11.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.968 --rc genhtml_branch_coverage=1 00:05:11.968 --rc genhtml_function_coverage=1 00:05:11.968 --rc genhtml_legend=1 00:05:11.968 --rc geninfo_all_blocks=1 00:05:11.968 --rc geninfo_unexecuted_blocks=1 00:05:11.968 00:05:11.968 ' 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58702 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58702 00:05:11.968 15:09:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58702 ']' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.968 15:09:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.227 [2024-10-25 15:09:54.707079] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
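Note: the lt/cmp_versions walk above (scripts/common.sh, here deciding that lcov 1.15 is older than 2) splits each version string on '.', '-' and ':' and compares it field by field numerically, padding missing fields with 0. Its core logic condensed into a sketch (simplified; the real script routes each field through its decimal helper to tolerate non-numeric parts):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2 v ver1 ver2
        read -ra ver1 <<< "$1"; read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]    # all fields equal: only ==, <=, >= succeed
    }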
00:05:12.227 [2024-10-25 15:09:54.707422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58702 ] 00:05:12.227 [2024-10-25 15:09:54.888953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.486 [2024-10-25 15:09:55.010068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.486 [2024-10-25 15:09:55.010102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.422 15:09:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.422 15:09:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:13.422 15:09:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58719 00:05:13.422 15:09:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:13.422 15:09:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:13.422 [ 00:05:13.422 "bdev_malloc_delete", 00:05:13.422 "bdev_malloc_create", 00:05:13.422 "bdev_null_resize", 00:05:13.422 "bdev_null_delete", 00:05:13.422 "bdev_null_create", 00:05:13.422 "bdev_nvme_cuse_unregister", 00:05:13.422 "bdev_nvme_cuse_register", 00:05:13.422 "bdev_opal_new_user", 00:05:13.422 "bdev_opal_set_lock_state", 00:05:13.422 "bdev_opal_delete", 00:05:13.422 "bdev_opal_get_info", 00:05:13.422 "bdev_opal_create", 00:05:13.422 "bdev_nvme_opal_revert", 00:05:13.422 "bdev_nvme_opal_init", 00:05:13.422 "bdev_nvme_send_cmd", 00:05:13.422 "bdev_nvme_set_keys", 00:05:13.422 "bdev_nvme_get_path_iostat", 00:05:13.422 "bdev_nvme_get_mdns_discovery_info", 00:05:13.422 "bdev_nvme_stop_mdns_discovery", 00:05:13.422 "bdev_nvme_start_mdns_discovery", 00:05:13.422 "bdev_nvme_set_multipath_policy", 00:05:13.422 "bdev_nvme_set_preferred_path", 00:05:13.422 "bdev_nvme_get_io_paths", 00:05:13.422 "bdev_nvme_remove_error_injection", 00:05:13.422 "bdev_nvme_add_error_injection", 00:05:13.422 "bdev_nvme_get_discovery_info", 00:05:13.422 "bdev_nvme_stop_discovery", 00:05:13.422 "bdev_nvme_start_discovery", 00:05:13.422 "bdev_nvme_get_controller_health_info", 00:05:13.422 "bdev_nvme_disable_controller", 00:05:13.422 "bdev_nvme_enable_controller", 00:05:13.422 "bdev_nvme_reset_controller", 00:05:13.422 "bdev_nvme_get_transport_statistics", 00:05:13.422 "bdev_nvme_apply_firmware", 00:05:13.422 "bdev_nvme_detach_controller", 00:05:13.422 "bdev_nvme_get_controllers", 00:05:13.422 "bdev_nvme_attach_controller", 00:05:13.422 "bdev_nvme_set_hotplug", 00:05:13.422 "bdev_nvme_set_options", 00:05:13.422 "bdev_passthru_delete", 00:05:13.422 "bdev_passthru_create", 00:05:13.422 "bdev_lvol_set_parent_bdev", 00:05:13.422 "bdev_lvol_set_parent", 00:05:13.422 "bdev_lvol_check_shallow_copy", 00:05:13.422 "bdev_lvol_start_shallow_copy", 00:05:13.422 "bdev_lvol_grow_lvstore", 00:05:13.422 "bdev_lvol_get_lvols", 00:05:13.422 "bdev_lvol_get_lvstores", 00:05:13.422 "bdev_lvol_delete", 00:05:13.422 "bdev_lvol_set_read_only", 00:05:13.422 "bdev_lvol_resize", 00:05:13.422 "bdev_lvol_decouple_parent", 00:05:13.422 "bdev_lvol_inflate", 00:05:13.422 "bdev_lvol_rename", 00:05:13.422 "bdev_lvol_clone_bdev", 00:05:13.422 "bdev_lvol_clone", 00:05:13.422 "bdev_lvol_snapshot", 00:05:13.422 "bdev_lvol_create", 00:05:13.422 "bdev_lvol_delete_lvstore", 00:05:13.422 "bdev_lvol_rename_lvstore", 00:05:13.422 
"bdev_lvol_create_lvstore", 00:05:13.422 "bdev_raid_set_options", 00:05:13.422 "bdev_raid_remove_base_bdev", 00:05:13.422 "bdev_raid_add_base_bdev", 00:05:13.422 "bdev_raid_delete", 00:05:13.422 "bdev_raid_create", 00:05:13.422 "bdev_raid_get_bdevs", 00:05:13.422 "bdev_error_inject_error", 00:05:13.422 "bdev_error_delete", 00:05:13.422 "bdev_error_create", 00:05:13.422 "bdev_split_delete", 00:05:13.422 "bdev_split_create", 00:05:13.422 "bdev_delay_delete", 00:05:13.422 "bdev_delay_create", 00:05:13.422 "bdev_delay_update_latency", 00:05:13.422 "bdev_zone_block_delete", 00:05:13.422 "bdev_zone_block_create", 00:05:13.422 "blobfs_create", 00:05:13.422 "blobfs_detect", 00:05:13.422 "blobfs_set_cache_size", 00:05:13.422 "bdev_xnvme_delete", 00:05:13.422 "bdev_xnvme_create", 00:05:13.422 "bdev_aio_delete", 00:05:13.422 "bdev_aio_rescan", 00:05:13.422 "bdev_aio_create", 00:05:13.422 "bdev_ftl_set_property", 00:05:13.422 "bdev_ftl_get_properties", 00:05:13.422 "bdev_ftl_get_stats", 00:05:13.422 "bdev_ftl_unmap", 00:05:13.422 "bdev_ftl_unload", 00:05:13.422 "bdev_ftl_delete", 00:05:13.422 "bdev_ftl_load", 00:05:13.422 "bdev_ftl_create", 00:05:13.422 "bdev_virtio_attach_controller", 00:05:13.422 "bdev_virtio_scsi_get_devices", 00:05:13.422 "bdev_virtio_detach_controller", 00:05:13.422 "bdev_virtio_blk_set_hotplug", 00:05:13.422 "bdev_iscsi_delete", 00:05:13.422 "bdev_iscsi_create", 00:05:13.422 "bdev_iscsi_set_options", 00:05:13.422 "accel_error_inject_error", 00:05:13.422 "ioat_scan_accel_module", 00:05:13.422 "dsa_scan_accel_module", 00:05:13.422 "iaa_scan_accel_module", 00:05:13.422 "keyring_file_remove_key", 00:05:13.422 "keyring_file_add_key", 00:05:13.422 "keyring_linux_set_options", 00:05:13.422 "fsdev_aio_delete", 00:05:13.422 "fsdev_aio_create", 00:05:13.422 "iscsi_get_histogram", 00:05:13.422 "iscsi_enable_histogram", 00:05:13.422 "iscsi_set_options", 00:05:13.422 "iscsi_get_auth_groups", 00:05:13.422 "iscsi_auth_group_remove_secret", 00:05:13.422 "iscsi_auth_group_add_secret", 00:05:13.422 "iscsi_delete_auth_group", 00:05:13.422 "iscsi_create_auth_group", 00:05:13.422 "iscsi_set_discovery_auth", 00:05:13.422 "iscsi_get_options", 00:05:13.422 "iscsi_target_node_request_logout", 00:05:13.422 "iscsi_target_node_set_redirect", 00:05:13.422 "iscsi_target_node_set_auth", 00:05:13.422 "iscsi_target_node_add_lun", 00:05:13.422 "iscsi_get_stats", 00:05:13.422 "iscsi_get_connections", 00:05:13.422 "iscsi_portal_group_set_auth", 00:05:13.422 "iscsi_start_portal_group", 00:05:13.422 "iscsi_delete_portal_group", 00:05:13.422 "iscsi_create_portal_group", 00:05:13.422 "iscsi_get_portal_groups", 00:05:13.422 "iscsi_delete_target_node", 00:05:13.422 "iscsi_target_node_remove_pg_ig_maps", 00:05:13.422 "iscsi_target_node_add_pg_ig_maps", 00:05:13.422 "iscsi_create_target_node", 00:05:13.422 "iscsi_get_target_nodes", 00:05:13.422 "iscsi_delete_initiator_group", 00:05:13.422 "iscsi_initiator_group_remove_initiators", 00:05:13.422 "iscsi_initiator_group_add_initiators", 00:05:13.422 "iscsi_create_initiator_group", 00:05:13.422 "iscsi_get_initiator_groups", 00:05:13.422 "nvmf_set_crdt", 00:05:13.422 "nvmf_set_config", 00:05:13.422 "nvmf_set_max_subsystems", 00:05:13.422 "nvmf_stop_mdns_prr", 00:05:13.422 "nvmf_publish_mdns_prr", 00:05:13.422 "nvmf_subsystem_get_listeners", 00:05:13.422 "nvmf_subsystem_get_qpairs", 00:05:13.422 "nvmf_subsystem_get_controllers", 00:05:13.422 "nvmf_get_stats", 00:05:13.422 "nvmf_get_transports", 00:05:13.422 "nvmf_create_transport", 00:05:13.422 "nvmf_get_targets", 00:05:13.422 
"nvmf_delete_target", 00:05:13.422 "nvmf_create_target", 00:05:13.422 "nvmf_subsystem_allow_any_host", 00:05:13.422 "nvmf_subsystem_set_keys", 00:05:13.422 "nvmf_subsystem_remove_host", 00:05:13.422 "nvmf_subsystem_add_host", 00:05:13.422 "nvmf_ns_remove_host", 00:05:13.422 "nvmf_ns_add_host", 00:05:13.422 "nvmf_subsystem_remove_ns", 00:05:13.422 "nvmf_subsystem_set_ns_ana_group", 00:05:13.422 "nvmf_subsystem_add_ns", 00:05:13.422 "nvmf_subsystem_listener_set_ana_state", 00:05:13.422 "nvmf_discovery_get_referrals", 00:05:13.422 "nvmf_discovery_remove_referral", 00:05:13.422 "nvmf_discovery_add_referral", 00:05:13.422 "nvmf_subsystem_remove_listener", 00:05:13.422 "nvmf_subsystem_add_listener", 00:05:13.422 "nvmf_delete_subsystem", 00:05:13.422 "nvmf_create_subsystem", 00:05:13.422 "nvmf_get_subsystems", 00:05:13.422 "env_dpdk_get_mem_stats", 00:05:13.422 "nbd_get_disks", 00:05:13.422 "nbd_stop_disk", 00:05:13.422 "nbd_start_disk", 00:05:13.422 "ublk_recover_disk", 00:05:13.422 "ublk_get_disks", 00:05:13.422 "ublk_stop_disk", 00:05:13.422 "ublk_start_disk", 00:05:13.422 "ublk_destroy_target", 00:05:13.422 "ublk_create_target", 00:05:13.422 "virtio_blk_create_transport", 00:05:13.422 "virtio_blk_get_transports", 00:05:13.422 "vhost_controller_set_coalescing", 00:05:13.422 "vhost_get_controllers", 00:05:13.422 "vhost_delete_controller", 00:05:13.422 "vhost_create_blk_controller", 00:05:13.422 "vhost_scsi_controller_remove_target", 00:05:13.422 "vhost_scsi_controller_add_target", 00:05:13.422 "vhost_start_scsi_controller", 00:05:13.422 "vhost_create_scsi_controller", 00:05:13.422 "thread_set_cpumask", 00:05:13.422 "scheduler_set_options", 00:05:13.422 "framework_get_governor", 00:05:13.422 "framework_get_scheduler", 00:05:13.422 "framework_set_scheduler", 00:05:13.422 "framework_get_reactors", 00:05:13.422 "thread_get_io_channels", 00:05:13.422 "thread_get_pollers", 00:05:13.422 "thread_get_stats", 00:05:13.422 "framework_monitor_context_switch", 00:05:13.422 "spdk_kill_instance", 00:05:13.422 "log_enable_timestamps", 00:05:13.422 "log_get_flags", 00:05:13.422 "log_clear_flag", 00:05:13.422 "log_set_flag", 00:05:13.422 "log_get_level", 00:05:13.422 "log_set_level", 00:05:13.422 "log_get_print_level", 00:05:13.422 "log_set_print_level", 00:05:13.422 "framework_enable_cpumask_locks", 00:05:13.423 "framework_disable_cpumask_locks", 00:05:13.423 "framework_wait_init", 00:05:13.423 "framework_start_init", 00:05:13.423 "scsi_get_devices", 00:05:13.423 "bdev_get_histogram", 00:05:13.423 "bdev_enable_histogram", 00:05:13.423 "bdev_set_qos_limit", 00:05:13.423 "bdev_set_qd_sampling_period", 00:05:13.423 "bdev_get_bdevs", 00:05:13.423 "bdev_reset_iostat", 00:05:13.423 "bdev_get_iostat", 00:05:13.423 "bdev_examine", 00:05:13.423 "bdev_wait_for_examine", 00:05:13.423 "bdev_set_options", 00:05:13.423 "accel_get_stats", 00:05:13.423 "accel_set_options", 00:05:13.423 "accel_set_driver", 00:05:13.423 "accel_crypto_key_destroy", 00:05:13.423 "accel_crypto_keys_get", 00:05:13.423 "accel_crypto_key_create", 00:05:13.423 "accel_assign_opc", 00:05:13.423 "accel_get_module_info", 00:05:13.423 "accel_get_opc_assignments", 00:05:13.423 "vmd_rescan", 00:05:13.423 "vmd_remove_device", 00:05:13.423 "vmd_enable", 00:05:13.423 "sock_get_default_impl", 00:05:13.423 "sock_set_default_impl", 00:05:13.423 "sock_impl_set_options", 00:05:13.423 "sock_impl_get_options", 00:05:13.423 "iobuf_get_stats", 00:05:13.423 "iobuf_set_options", 00:05:13.423 "keyring_get_keys", 00:05:13.423 "framework_get_pci_devices", 00:05:13.423 
"framework_get_config", 00:05:13.423 "framework_get_subsystems", 00:05:13.423 "fsdev_set_opts", 00:05:13.423 "fsdev_get_opts", 00:05:13.423 "trace_get_info", 00:05:13.423 "trace_get_tpoint_group_mask", 00:05:13.423 "trace_disable_tpoint_group", 00:05:13.423 "trace_enable_tpoint_group", 00:05:13.423 "trace_clear_tpoint_mask", 00:05:13.423 "trace_set_tpoint_mask", 00:05:13.423 "notify_get_notifications", 00:05:13.423 "notify_get_types", 00:05:13.423 "spdk_get_version", 00:05:13.423 "rpc_get_methods" 00:05:13.423 ] 00:05:13.423 15:09:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.423 15:09:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:13.423 15:09:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58702 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58702 ']' 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58702 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.423 15:09:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58702 00:05:13.718 15:09:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.718 killing process with pid 58702 00:05:13.718 15:09:56 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.718 15:09:56 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58702' 00:05:13.718 15:09:56 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58702 00:05:13.718 15:09:56 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58702 00:05:16.251 ************************************ 00:05:16.251 END TEST spdkcli_tcp 00:05:16.251 ************************************ 00:05:16.251 00:05:16.251 real 0m4.202s 00:05:16.251 user 0m7.451s 00:05:16.251 sys 0m0.652s 00:05:16.251 15:09:58 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.251 15:09:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.251 15:09:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.251 15:09:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:16.251 15:09:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.251 15:09:58 -- common/autotest_common.sh@10 -- # set +x 00:05:16.251 ************************************ 00:05:16.251 START TEST dpdk_mem_utility 00:05:16.251 ************************************ 00:05:16.251 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.251 * Looking for test storage... 
00:05:16.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.251 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:16.251 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:05:16.251 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:16.251 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:16.251 15:09:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.252 15:09:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.252 --rc genhtml_branch_coverage=1 00:05:16.252 --rc genhtml_function_coverage=1 00:05:16.252 --rc genhtml_legend=1 00:05:16.252 --rc geninfo_all_blocks=1 00:05:16.252 --rc geninfo_unexecuted_blocks=1 00:05:16.252 00:05:16.252 ' 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.252 --rc 
genhtml_branch_coverage=1 00:05:16.252 --rc genhtml_function_coverage=1 00:05:16.252 --rc genhtml_legend=1 00:05:16.252 --rc geninfo_all_blocks=1 00:05:16.252 --rc geninfo_unexecuted_blocks=1 00:05:16.252 00:05:16.252 ' 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.252 --rc genhtml_branch_coverage=1 00:05:16.252 --rc genhtml_function_coverage=1 00:05:16.252 --rc genhtml_legend=1 00:05:16.252 --rc geninfo_all_blocks=1 00:05:16.252 --rc geninfo_unexecuted_blocks=1 00:05:16.252 00:05:16.252 ' 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:16.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.252 --rc genhtml_branch_coverage=1 00:05:16.252 --rc genhtml_function_coverage=1 00:05:16.252 --rc genhtml_legend=1 00:05:16.252 --rc geninfo_all_blocks=1 00:05:16.252 --rc geninfo_unexecuted_blocks=1 00:05:16.252 00:05:16.252 ' 00:05:16.252 15:09:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.252 15:09:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58824 00:05:16.252 15:09:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.252 15:09:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58824 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58824 ']' 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:16.252 15:09:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.518 [2024-10-25 15:09:58.999727] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:05:16.518 [2024-10-25 15:09:59.000027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58824 ] 00:05:16.518 [2024-10-25 15:09:59.183778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.791 [2024-10-25 15:09:59.297655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.728 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:17.728 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:17.728 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.728 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.728 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.728 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.728 { 00:05:17.728 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.728 } 00:05:17.728 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.728 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.728 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:17.728 1 heaps totaling size 816.000000 MiB 00:05:17.728 size: 816.000000 MiB heap id: 0 00:05:17.728 end heaps---------- 00:05:17.728 9 mempools totaling size 595.772034 MiB 00:05:17.728 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.728 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.728 size: 92.545471 MiB name: bdev_io_58824 00:05:17.728 size: 50.003479 MiB name: msgpool_58824 00:05:17.728 size: 36.509338 MiB name: fsdev_io_58824 00:05:17.728 size: 21.763794 MiB name: PDU_Pool 00:05:17.728 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.728 size: 4.133484 MiB name: evtpool_58824 00:05:17.728 size: 0.026123 MiB name: Session_Pool 00:05:17.728 end mempools------- 00:05:17.728 6 memzones totaling size 4.142822 MiB 00:05:17.728 size: 1.000366 MiB name: RG_ring_0_58824 00:05:17.728 size: 1.000366 MiB name: RG_ring_1_58824 00:05:17.728 size: 1.000366 MiB name: RG_ring_4_58824 00:05:17.728 size: 1.000366 MiB name: RG_ring_5_58824 00:05:17.728 size: 0.125366 MiB name: RG_ring_2_58824 00:05:17.728 size: 0.015991 MiB name: RG_ring_3_58824 00:05:17.728 end memzones------- 00:05:17.728 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.728 heap id: 0 total size: 816.000000 MiB number of busy elements: 324 number of free elements: 18 00:05:17.728 list of free elements. 
size: 16.789185 MiB 00:05:17.728 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:17.728 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:17.728 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:17.728 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:17.728 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:17.728 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:17.728 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:17.728 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:17.728 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:17.728 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:17.728 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:17.728 element at address: 0x20001ac00000 with size: 0.559509 MiB 00:05:17.728 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:17.728 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:17.728 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:17.728 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:17.728 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:17.728 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:17.728 list of standard malloc elements. size: 199.289917 MiB 00:05:17.728 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:17.728 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:17.728 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:17.728 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:17.728 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:17.728 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:17.728 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:17.728 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:17.728 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:17.728 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:17.728 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:17.728 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:17.728 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:17.728 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:17.729 [several hundred further free-heap elements, each 0.000244 MiB, covering the 0x2000004f, 0x2000008, 0x200000c7, 0x20000a5f, 0x200012, 0x200018, 0x2000192-0x2000196, 0x20001ac9 and 0x2000280 address regions] 00:05:17.731 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:17.731 list of memzone associated elements. 
size: 599.920898 MiB 00:05:17.731 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:17.731 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.731 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:17.731 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.731 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:17.731 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58824_0 00:05:17.731 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:17.731 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58824_0 00:05:17.731 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:17.731 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58824_0 00:05:17.731 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:17.731 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.731 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:17.731 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.731 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:17.731 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58824_0 00:05:17.731 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:17.731 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58824 00:05:17.731 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:17.731 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58824 00:05:17.731 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:17.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.731 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:17.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.731 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:17.731 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.731 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:17.731 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.731 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58824 00:05:17.731 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58824 00:05:17.731 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58824 00:05:17.731 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:17.731 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58824 00:05:17.731 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58824 00:05:17.731 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58824 00:05:17.731 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.731 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:17.731 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.731 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:17.731 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.731 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:17.731 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58824 00:05:17.731 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:17.731 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58824 00:05:17.731 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:17.731 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.731 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:17.731 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.731 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:17.731 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58824 00:05:17.731 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:17.731 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.731 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:17.731 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58824 00:05:17.731 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:17.731 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58824 00:05:17.731 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:17.731 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58824 00:05:17.731 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:17.731 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.731 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.731 15:10:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58824 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58824 ']' 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58824 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58824 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58824' 00:05:17.731 killing process with pid 58824 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58824 00:05:17.731 15:10:00 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58824 00:05:20.262 00:05:20.262 real 0m4.082s 00:05:20.262 user 0m3.999s 00:05:20.262 sys 0m0.620s 00:05:20.262 15:10:02 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.263 15:10:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:20.263 ************************************ 00:05:20.263 END TEST dpdk_mem_utility 00:05:20.263 ************************************ 00:05:20.263 15:10:02 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.263 15:10:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.263 15:10:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.263 15:10:02 -- common/autotest_common.sh@10 -- # set +x 
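The teardown above traces the killprocess helper from common/autotest_common.sh: an empty-pid guard, a kill -0 liveness probe, and a ps lookup of the process name before the signal is sent. A minimal sketch of that logic as reconstructed from the xtrace; the body of the sudo branch and the final wait handling are assumptions:

    killprocess() {
        local pid=$1
        # guard against being called without a pid ('[' -z 58824 ']' in the trace)
        [ -n "$pid" ] || return 1
        # kill -0 sends no signal; it only checks that the process still exists
        kill -0 "$pid" || return 1
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                # assumption: a target running under sudo must be signalled via sudo
                sudo kill "$pid"
            else
                echo "killing process with pid $pid"
                kill "$pid"
            fi
        fi
        # the trace ends with 'wait 58824', reaping the process before timing is printed
        wait "$pid" 2>/dev/null || true
    }

Probing with kill -0 first keeps the helper idempotent: calling it on an already-exited pid returns early instead of sending a signal to a recycled process id.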
00:05:20.263 ************************************ 00:05:20.263 START TEST event 00:05:20.263 ************************************ 00:05:20.263 15:10:02 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:20.263 * Looking for test storage... 00:05:20.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:20.263 15:10:02 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:20.263 15:10:02 event -- common/autotest_common.sh@1689 -- # lcov --version 00:05:20.263 15:10:02 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:20.522 15:10:02 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:20.522 15:10:02 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.522 15:10:02 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.522 15:10:02 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.522 15:10:02 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.522 15:10:02 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.522 15:10:02 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.522 15:10:02 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.522 15:10:02 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.522 15:10:02 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.522 15:10:02 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.522 15:10:02 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.522 15:10:03 event -- scripts/common.sh@344 -- # case "$op" in 00:05:20.522 15:10:03 event -- scripts/common.sh@345 -- # : 1 00:05:20.522 15:10:03 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.522 15:10:03 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:20.522 15:10:03 event -- scripts/common.sh@365 -- # decimal 1 00:05:20.522 15:10:03 event -- scripts/common.sh@353 -- # local d=1 00:05:20.522 15:10:03 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.522 15:10:03 event -- scripts/common.sh@355 -- # echo 1 00:05:20.522 15:10:03 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.522 15:10:03 event -- scripts/common.sh@366 -- # decimal 2 00:05:20.522 15:10:03 event -- scripts/common.sh@353 -- # local d=2 00:05:20.522 15:10:03 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.522 15:10:03 event -- scripts/common.sh@355 -- # echo 2 00:05:20.522 15:10:03 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.522 15:10:03 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.522 15:10:03 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.522 15:10:03 event -- scripts/common.sh@368 -- # return 0 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.522 --rc genhtml_branch_coverage=1 00:05:20.522 --rc genhtml_function_coverage=1 00:05:20.522 --rc genhtml_legend=1 00:05:20.522 --rc geninfo_all_blocks=1 00:05:20.522 --rc geninfo_unexecuted_blocks=1 00:05:20.522 00:05:20.522 ' 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.522 --rc genhtml_branch_coverage=1 00:05:20.522 --rc genhtml_function_coverage=1 00:05:20.522 --rc genhtml_legend=1 00:05:20.522 --rc 
geninfo_all_blocks=1 00:05:20.522 --rc geninfo_unexecuted_blocks=1 00:05:20.522 00:05:20.522 ' 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.522 --rc genhtml_branch_coverage=1 00:05:20.522 --rc genhtml_function_coverage=1 00:05:20.522 --rc genhtml_legend=1 00:05:20.522 --rc geninfo_all_blocks=1 00:05:20.522 --rc geninfo_unexecuted_blocks=1 00:05:20.522 00:05:20.522 ' 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:20.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.522 --rc genhtml_branch_coverage=1 00:05:20.522 --rc genhtml_function_coverage=1 00:05:20.522 --rc genhtml_legend=1 00:05:20.522 --rc geninfo_all_blocks=1 00:05:20.522 --rc geninfo_unexecuted_blocks=1 00:05:20.522 00:05:20.522 ' 00:05:20.522 15:10:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:20.522 15:10:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:20.522 15:10:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:20.522 15:10:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.523 15:10:03 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.523 ************************************ 00:05:20.523 START TEST event_perf 00:05:20.523 ************************************ 00:05:20.523 15:10:03 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:20.523 Running I/O for 1 seconds...[2024-10-25 15:10:03.087566] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:20.523 [2024-10-25 15:10:03.087799] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58932 ] 00:05:20.782 [2024-10-25 15:10:03.277168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.782 [2024-10-25 15:10:03.396418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.782 [2024-10-25 15:10:03.396572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.782 [2024-10-25 15:10:03.396740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.782 [2024-10-25 15:10:03.396783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.220 Running I/O for 1 seconds... 00:05:22.220 lcore 0: 95323 00:05:22.220 lcore 1: 95323 00:05:22.220 lcore 2: 95318 00:05:22.220 lcore 3: 95320 00:05:22.220 done. 
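Every START/END banner pair and the real/user/sys summary that follows each fixture come from the run_test wrapper in common/autotest_common.sh. A plausible reconstruction, assuming the banner text and the use of the shell time builtin; the real wrapper also toggles xtrace and validates its argument count, as the '[' 6 -le 1 ']' check in the trace shows:

    run_test() {
        local test_name=$1
        shift
        # a test needs a name plus at least one command word to execute
        [ "$#" -le 1 ] && return 1
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # produces the 'real/user/sys' lines seen in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

Under this reading, the 'real 0m1.609s' block printed just below is the wall-clock cost of the one-second event_perf run plus its EAL startup and teardown.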
00:05:22.220 00:05:22.220 real 0m1.609s 00:05:22.220 user 0m4.330s 00:05:22.220 sys 0m0.154s 00:05:22.220 ************************************ 00:05:22.220 END TEST event_perf 00:05:22.220 15:10:04 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.220 15:10:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.220 ************************************ 00:05:22.220 15:10:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.220 15:10:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:22.220 15:10:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.220 15:10:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.220 ************************************ 00:05:22.220 START TEST event_reactor 00:05:22.220 ************************************ 00:05:22.220 15:10:04 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:22.220 [2024-10-25 15:10:04.769836] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:22.220 [2024-10-25 15:10:04.770115] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58971 ] 00:05:22.480 [2024-10-25 15:10:04.952806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.480 [2024-10-25 15:10:05.073908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.857 test_start 00:05:23.857 oneshot 00:05:23.857 tick 100 00:05:23.857 tick 100 00:05:23.857 tick 250 00:05:23.857 tick 100 00:05:23.857 tick 100 00:05:23.857 tick 100 00:05:23.857 tick 250 00:05:23.857 tick 500 00:05:23.857 tick 100 00:05:23.857 tick 100 00:05:23.857 tick 250 00:05:23.857 tick 100 00:05:23.857 tick 100 00:05:23.857 test_end 00:05:23.857 00:05:23.857 real 0m1.585s 00:05:23.857 user 0m1.367s 00:05:23.857 sys 0m0.109s 00:05:23.857 15:10:06 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.857 ************************************ 00:05:23.857 END TEST event_reactor 00:05:23.857 ************************************ 00:05:23.857 15:10:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 15:10:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.857 15:10:06 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:23.857 15:10:06 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.857 15:10:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 ************************************ 00:05:23.857 START TEST event_reactor_perf 00:05:23.857 ************************************ 00:05:23.857 15:10:06 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:23.857 [2024-10-25 15:10:06.431381] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:05:23.857 [2024-10-25 15:10:06.431495] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ] 00:05:24.116 [2024-10-25 15:10:06.614167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.116 [2024-10-25 15:10:06.728355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.495 test_start 00:05:25.496 test_end 00:05:25.496 Performance: 378569 events per second 00:05:25.496 00:05:25.496 real 0m1.575s 00:05:25.496 user 0m1.347s 00:05:25.496 sys 0m0.118s 00:05:25.496 15:10:07 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.496 15:10:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:25.496 ************************************ 00:05:25.496 END TEST event_reactor_perf 00:05:25.496 ************************************ 00:05:25.496 15:10:08 event -- event/event.sh@49 -- # uname -s 00:05:25.496 15:10:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:25.496 15:10:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.496 15:10:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.496 15:10:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.496 15:10:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.496 ************************************ 00:05:25.496 START TEST event_scheduler 00:05:25.496 ************************************ 00:05:25.496 15:10:08 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:25.496 * Looking for test storage... 
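The lcov gate traced earlier for the event suite, and again below for the scheduler suite, rests on the version comparison helpers in scripts/common.sh. A condensed sketch reconstructed from that xtrace, with assumed handling of non-numeric fields and of the equal-versions case:

    lt() { cmp_versions "$1" '<' "$2"; }      # 'lt 1.15 2' as invoked in the trace

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"        # split fields on dots, dashes and colons
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        # walk the longer of the two field lists, as in the traced loop condition
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            [[ $d1 =~ ^[0-9]+$ ]] || d1=0     # assumption: non-numeric fields compare as 0
            [[ $d2 =~ ^[0-9]+$ ]] || d2=0
            if (( d1 > d2 )); then [[ $op == '>' ]]; return; fi
            if (( d1 < d2 )); then [[ $op == '<' ]]; return; fi
        done
        return 1                              # assumption: equal versions satisfy neither op
    }

For 'lt 1.15 2' the first fields already differ (1 < 2), so the function returns 0 on the first iteration, matching the 'return 0' at the end of the trace.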
00:05:25.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:25.496 15:10:08 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:25.496 15:10:08 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:05:25.496 15:10:08 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.756 15:10:08 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.756 --rc genhtml_branch_coverage=1 00:05:25.756 --rc genhtml_function_coverage=1 00:05:25.756 --rc genhtml_legend=1 00:05:25.756 --rc geninfo_all_blocks=1 00:05:25.756 --rc geninfo_unexecuted_blocks=1 00:05:25.756 00:05:25.756 ' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.756 --rc genhtml_branch_coverage=1 00:05:25.756 --rc genhtml_function_coverage=1 00:05:25.756 --rc genhtml_legend=1 00:05:25.756 --rc geninfo_all_blocks=1 00:05:25.756 --rc geninfo_unexecuted_blocks=1 00:05:25.756 00:05:25.756 ' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.756 --rc genhtml_branch_coverage=1 00:05:25.756 --rc genhtml_function_coverage=1 00:05:25.756 --rc genhtml_legend=1 00:05:25.756 --rc geninfo_all_blocks=1 00:05:25.756 --rc geninfo_unexecuted_blocks=1 00:05:25.756 00:05:25.756 ' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:25.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.756 --rc genhtml_branch_coverage=1 00:05:25.756 --rc genhtml_function_coverage=1 00:05:25.756 --rc genhtml_legend=1 00:05:25.756 --rc geninfo_all_blocks=1 00:05:25.756 --rc geninfo_unexecuted_blocks=1 00:05:25.756 00:05:25.756 ' 00:05:25.756 15:10:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:25.756 15:10:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59084 00:05:25.756 15:10:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:25.756 15:10:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.756 15:10:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59084 00:05:25.756 15:10:08 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59084 ']' 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.756 15:10:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.756 [2024-10-25 15:10:08.386784] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:25.756 [2024-10-25 15:10:08.387235] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:05:26.014 [2024-10-25 15:10:08.587173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.014 [2024-10-25 15:10:08.740613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.014 [2024-10-25 15:10:08.740756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.014 [2024-10-25 15:10:08.740977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.014 [2024-10-25 15:10:08.740990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.950 15:10:09 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.950 15:10:09 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:26.950 15:10:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:26.950 15:10:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.950 15:10:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:26.950 POWER: Cannot set governor of lcore 0 to userspace 00:05:26.950 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:26.950 POWER: Cannot set governor of lcore 0 to performance 00:05:26.951 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:26.951 POWER: Cannot set governor of lcore 0 to userspace 00:05:26.951 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:26.951 POWER: Cannot set governor of lcore 0 to userspace 00:05:26.951 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:26.951 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:26.951 POWER: Unable to set Power Management Environment for lcore 0 00:05:26.951 [2024-10-25 15:10:09.337996] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:26.951 [2024-10-25 15:10:09.338030] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:26.951 [2024-10-25 15:10:09.338044] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:26.951 [2024-10-25 15:10:09.338068] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:26.951 [2024-10-25 15:10:09.338080] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:26.951 [2024-10-25 15:10:09.338093] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:26.951 15:10:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.951 15:10:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:26.951 15:10:09 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.951 15:10:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 [2024-10-25 15:10:09.728482] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:27.278 15:10:09 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.278 15:10:09 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.278 15:10:09 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 ************************************ 00:05:27.278 START TEST scheduler_create_thread 00:05:27.278 ************************************ 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 2 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 3 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 4 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 5 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 6 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 7 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 8 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 9 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.278 10 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:27.278 15:10:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.663 15:10:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:28.663 15:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:28.663 15:10:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:28.663 15:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:28.663 15:10:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.599 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.599 15:10:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:29.599 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.599 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.166 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:30.166 15:10:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.166 15:10:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.166 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:30.166 15:10:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.123 ************************************ 00:05:31.123 END TEST scheduler_create_thread 00:05:31.123 ************************************ 00:05:31.123 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:31.123 00:05:31.123 real 0m3.889s 00:05:31.123 user 0m0.026s 00:05:31.123 sys 0m0.007s 00:05:31.123 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.123 15:10:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.123 15:10:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.123 15:10:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59084 00:05:31.123 15:10:13 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59084 ']' 00:05:31.123 15:10:13 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59084 00:05:31.123 15:10:13 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:31.123 15:10:13 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:31.123 15:10:13 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59084 00:05:31.124 killing process with pid 59084 00:05:31.124 15:10:13 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:31.124 15:10:13 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:31.124 15:10:13 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59084' 00:05:31.124 15:10:13 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59084 00:05:31.124 15:10:13 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 59084 00:05:31.382 [2024-10-25 15:10:14.014066] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:32.759 00:05:32.759 real 0m7.227s 00:05:32.759 user 0m14.856s 00:05:32.759 sys 0m0.676s 00:05:32.759 15:10:15 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.759 15:10:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.759 ************************************ 00:05:32.759 END TEST event_scheduler 00:05:32.759 ************************************ 00:05:32.759 15:10:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:32.759 15:10:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:32.759 15:10:15 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.759 15:10:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.759 15:10:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.759 ************************************ 00:05:32.759 START TEST app_repeat 00:05:32.759 ************************************ 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59212 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.759 Process app_repeat pid: 59212 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59212' 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:32.759 spdk_app_start Round 0 00:05:32.759 15:10:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59212 ']' 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.759 15:10:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.759 [2024-10-25 15:10:15.419926] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
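The variable setup and the 'spdk_app_start Round 0' banner above belong to app_repeat_test in test/event/event.sh. A sketch of its driver loop as reconstructed from the trace; backgrounding the app with & and the $rootdir path prefix are assumptions:

    app_repeat_test() {
        local rpc_server=/var/tmp/spdk-nbd.sock
        local nbd_list=("/dev/nbd0" "/dev/nbd1")
        local bdev_list=("Malloc0" "Malloc1")
        local repeat_times=4

        modprobe nbd
        "$rootdir/test/event/app_repeat/app_repeat" -r "$rpc_server" -m 0x3 -t 4 &
        local repeat_pid=$!
        trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
        echo "Process app_repeat pid: $repeat_pid"

        local i
        for i in {0..2}; do
            echo "spdk_app_start Round $i"
            waitforlisten "$repeat_pid" "$rpc_server"
            # each round creates the Malloc bdevs, exercises them over nbd,
            # then tears the app down and restarts it for the next round
        done
    }

The rounds that follow in the log (Malloc0/Malloc1 creation, nbd_start_disks, dd verification) are the body of that loop.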
00:05:32.759 [2024-10-25 15:10:15.420044] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:05:33.018 [2024-10-25 15:10:15.603428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.018 [2024-10-25 15:10:15.722983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.018 [2024-10-25 15:10:15.723015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.973 15:10:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.973 15:10:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:33.973 15:10:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.973 Malloc0 00:05:33.973 15:10:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.235 Malloc1 00:05:34.235 15:10:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.235 15:10:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.495 /dev/nbd0 00:05:34.495 15:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.495 15:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.495 15:10:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:34.495 15:10:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.495 15:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.495 15:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.495 15:10:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:34.495 15:10:17 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.496 1+0 records in 00:05:34.496 1+0 records out 00:05:34.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408269 s, 10.0 MB/s 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.496 15:10:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.496 15:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.496 15:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.496 15:10:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.755 /dev/nbd1 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.755 1+0 records in 00:05:34.755 1+0 records out 00:05:34.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387411 s, 10.6 MB/s 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:34.755 15:10:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
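
At this point both malloc bdevs are exported as kernel block devices: nbd_start_disk attaches each bdev over the RPC socket, and the waitfornbd helper visible in the xtrace polls /proc/partitions until the kernel node appears, then proves it readable with a single 4 KiB O_DIRECT read. A minimal sketch of that pattern, paraphrased from what the trace shows rather than copied from autotest_common.sh (the sleep interval is an assumption; the real helper's delay is not visible here):

waitfornbd_sketch() {
    local nbd_name=$1 tmp i
    # Poll /proc/partitions until the kernel registers the device (at most 20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off between polls
    done
    ((i <= 20)) || return 1
    # Prove the device is actually readable: one 4 KiB O_DIRECT read must
    # land a non-empty file on disk, matching the dd/stat pair in the trace.
    tmp=$(mktemp)
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" != 0 ] || { rm -f "$tmp"; return 1; }
    rm -f "$tmp"
    return 0
}
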
00:05:34.755 15:10:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.015 { 00:05:35.015 "nbd_device": "/dev/nbd0", 00:05:35.015 "bdev_name": "Malloc0" 00:05:35.015 }, 00:05:35.015 { 00:05:35.015 "nbd_device": "/dev/nbd1", 00:05:35.015 "bdev_name": "Malloc1" 00:05:35.015 } 00:05:35.015 ]' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.015 { 00:05:35.015 "nbd_device": "/dev/nbd0", 00:05:35.015 "bdev_name": "Malloc0" 00:05:35.015 }, 00:05:35.015 { 00:05:35.015 "nbd_device": "/dev/nbd1", 00:05:35.015 "bdev_name": "Malloc1" 00:05:35.015 } 00:05:35.015 ]' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.015 /dev/nbd1' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.015 /dev/nbd1' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.015 15:10:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.275 256+0 records in 00:05:35.275 256+0 records out 00:05:35.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133339 s, 78.6 MB/s 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.275 256+0 records in 00:05:35.275 256+0 records out 00:05:35.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333378 s, 31.5 MB/s 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.275 256+0 records in 00:05:35.275 256+0 records out 00:05:35.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0425456 s, 24.6 MB/s 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.275 15:10:17 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.275 15:10:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.534 15:10:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.793 15:10:18 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.793 15:10:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.053 15:10:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.053 15:10:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.312 15:10:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.690 [2024-10-25 15:10:20.224731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.690 [2024-10-25 15:10:20.344858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.690 [2024-10-25 15:10:20.344862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.987 [2024-10-25 15:10:20.553409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.987 [2024-10-25 15:10:20.553484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:39.365 15:10:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.365 spdk_app_start Round 1 00:05:39.365 15:10:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:39.365 15:10:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:05:39.365 15:10:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59212 ']' 00:05:39.365 15:10:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.366 15:10:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.366 15:10:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
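
Round 0 is now complete: both exports were detached, nbd_get_disks came back empty, and spdk_kill_instance SIGTERM stopped the app before Round 1 began above. The loop driving these rounds has roughly this shape, reconstructed from the event/event.sh line numbers in the trace (rpc() and APP_PID are illustrative stand-ins, not the script's real names; waitforlisten and nbd_rpc_data_verify are the harness helpers already traced):

RPC_SOCK=/var/tmp/spdk-nbd.sock
rpc() { scripts/rpc.py -s "$RPC_SOCK" "$@"; }

for round in {0..2}; do
    echo "spdk_app_start Round $round"
    waitforlisten "$APP_PID" "$RPC_SOCK"   # block until the RPC socket answers
    # Two 64 MiB malloc bdevs with a 4 KiB block size, re-created each round.
    rpc bdev_malloc_create 64 4096         # -> Malloc0
    rpc bdev_malloc_create 64 4096         # -> Malloc1
    nbd_rpc_data_verify "$RPC_SOCK" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc spdk_kill_instance SIGTERM         # end this iteration of the app
    sleep 3                                # give it time to come back up
done
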
00:05:39.366 15:10:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.366 15:10:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.625 15:10:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.625 15:10:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:39.625 15:10:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.192 Malloc0 00:05:40.192 15:10:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.451 Malloc1 00:05:40.451 15:10:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.451 15:10:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.711 /dev/nbd0 00:05:40.711 15:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.711 15:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.711 1+0 records in 00:05:40.711 1+0 records out 
00:05:40.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440914 s, 9.3 MB/s 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.711 15:10:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.711 15:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.711 15:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.711 15:10:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.971 /dev/nbd1 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.971 1+0 records in 00:05:40.971 1+0 records out 00:05:40.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502017 s, 8.2 MB/s 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:40.971 15:10:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.971 15:10:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.230 { 00:05:41.230 "nbd_device": "/dev/nbd0", 00:05:41.230 "bdev_name": "Malloc0" 00:05:41.230 }, 00:05:41.230 { 00:05:41.230 "nbd_device": "/dev/nbd1", 00:05:41.230 "bdev_name": "Malloc1" 00:05:41.230 } 
00:05:41.230 ]' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.230 { 00:05:41.230 "nbd_device": "/dev/nbd0", 00:05:41.230 "bdev_name": "Malloc0" 00:05:41.230 }, 00:05:41.230 { 00:05:41.230 "nbd_device": "/dev/nbd1", 00:05:41.230 "bdev_name": "Malloc1" 00:05:41.230 } 00:05:41.230 ]' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.230 /dev/nbd1' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.230 /dev/nbd1' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.230 256+0 records in 00:05:41.230 256+0 records out 00:05:41.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569694 s, 184 MB/s 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.230 15:10:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.490 256+0 records in 00:05:41.490 256+0 records out 00:05:41.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340936 s, 30.8 MB/s 00:05:41.490 15:10:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.490 15:10:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.490 256+0 records in 00:05:41.490 256+0 records out 00:05:41.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375615 s, 27.9 MB/s 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.490 15:10:24 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.490 15:10:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.749 15:10:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.008 15:10:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.267 15:10:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.267 15:10:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.849 15:10:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.227 [2024-10-25 15:10:26.534155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.227 [2024-10-25 15:10:26.659720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.227 [2024-10-25 15:10:26.659737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.227 [2024-10-25 15:10:26.871224] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.227 [2024-10-25 15:10:26.871304] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.633 15:10:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.633 spdk_app_start Round 2 00:05:45.633 15:10:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.633 15:10:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59212 ']' 00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
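
Each round repeats the same write/verify pass: seed 1 MiB of random data, push it to both exports with O_DIRECT, then byte-compare each device against the seed. As a standalone sketch (the temp-file path is hypothetical; the 256 x 4 KiB sizes match the dd transfers in the log):

tmp_file=$(mktemp)
# Seed 1 MiB of random data.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # O_DIRECT writes bypass the page cache, so the data really reaches the bdev.
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    # cmp exits non-zero on the first differing byte within the 1 MiB window.
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"
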
00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.633 15:10:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.892 15:10:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.892 15:10:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:45.892 15:10:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.152 Malloc0 00:05:46.152 15:10:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.413 Malloc1 00:05:46.413 15:10:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.413 15:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.672 /dev/nbd0 00:05:46.672 15:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.672 15:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.672 15:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.673 1+0 records in 00:05:46.673 1+0 records out 
00:05:46.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261035 s, 15.7 MB/s 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.673 15:10:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.673 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.673 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.673 15:10:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.932 /dev/nbd1 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.932 1+0 records in 00:05:46.932 1+0 records out 00:05:46.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383999 s, 10.7 MB/s 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:46.932 15:10:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.932 15:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.191 { 00:05:47.191 "nbd_device": "/dev/nbd0", 00:05:47.191 "bdev_name": "Malloc0" 00:05:47.191 }, 00:05:47.191 { 00:05:47.191 "nbd_device": "/dev/nbd1", 00:05:47.191 "bdev_name": "Malloc1" 00:05:47.191 } 
00:05:47.191 ]' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.191 { 00:05:47.191 "nbd_device": "/dev/nbd0", 00:05:47.191 "bdev_name": "Malloc0" 00:05:47.191 }, 00:05:47.191 { 00:05:47.191 "nbd_device": "/dev/nbd1", 00:05:47.191 "bdev_name": "Malloc1" 00:05:47.191 } 00:05:47.191 ]' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.191 /dev/nbd1' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.191 /dev/nbd1' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.191 15:10:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.191 256+0 records in 00:05:47.191 256+0 records out 00:05:47.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128088 s, 81.9 MB/s 00:05:47.192 15:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.451 256+0 records in 00:05:47.451 256+0 records out 00:05:47.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033853 s, 31.0 MB/s 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.451 256+0 records in 00:05:47.451 256+0 records out 00:05:47.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0349266 s, 30.0 MB/s 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:47.451 15:10:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.451 15:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.710 15:10:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.969 15:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.228 15:10:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.228 15:10:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.487 15:10:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.866 [2024-10-25 15:10:32.403510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.866 [2024-10-25 15:10:32.522108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.866 [2024-10-25 15:10:32.522109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.126 [2024-10-25 15:10:32.724847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.126 [2024-10-25 15:10:32.724952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:51.549 15:10:34 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59212 ']' 00:05:51.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
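
The final wait at event/event.sh@38 only has to confirm the app came back up before killprocess tears it down for good. Conceptually, waitforlisten needs two conditions to hold before it returns: the target pid is still alive, and the UNIX-domain RPC socket answers. The following is a plausible re-implementation, not the harness's exact code (the retry budget mirrors the max_retries=100 visible in the trace, and the poll interval is an assumption):

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        # rpc_get_methods is a cheap RPC that every SPDK app serves.
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5    # assumed poll interval
    done
    return 1
}
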
00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.549 15:10:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:51.808 15:10:34 event.app_repeat -- event/event.sh@39 -- # killprocess 59212 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59212 ']' 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59212 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59212 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.808 killing process with pid 59212 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59212' 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59212 00:05:51.808 15:10:34 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59212 00:05:53.184 spdk_app_start is called in Round 0. 00:05:53.184 Shutdown signal received, stop current app iteration 00:05:53.184 Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 reinitialization... 00:05:53.184 spdk_app_start is called in Round 1. 00:05:53.184 Shutdown signal received, stop current app iteration 00:05:53.184 Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 reinitialization... 00:05:53.184 spdk_app_start is called in Round 2. 00:05:53.184 Shutdown signal received, stop current app iteration 00:05:53.184 Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 reinitialization... 00:05:53.184 spdk_app_start is called in Round 3. 00:05:53.184 Shutdown signal received, stop current app iteration 00:05:53.184 15:10:35 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.184 15:10:35 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.184 00:05:53.184 real 0m20.222s 00:05:53.184 user 0m43.286s 00:05:53.184 sys 0m3.257s 00:05:53.184 15:10:35 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.184 ************************************ 00:05:53.184 END TEST app_repeat 00:05:53.184 ************************************ 00:05:53.184 15:10:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 15:10:35 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.184 15:10:35 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.184 15:10:35 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.184 15:10:35 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.184 15:10:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 ************************************ 00:05:53.184 START TEST cpu_locks 00:05:53.184 ************************************ 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:53.184 * Looking for test storage... 
00:05:53.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.184 15:10:35 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:05:53.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.184 --rc genhtml_branch_coverage=1 00:05:53.184 --rc genhtml_function_coverage=1 00:05:53.184 --rc genhtml_legend=1 00:05:53.184 --rc geninfo_all_blocks=1 00:05:53.184 --rc geninfo_unexecuted_blocks=1 00:05:53.184 00:05:53.184 ' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:05:53.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.184 --rc genhtml_branch_coverage=1 00:05:53.184 --rc genhtml_function_coverage=1 
00:05:53.184 --rc genhtml_legend=1 00:05:53.184 --rc geninfo_all_blocks=1 00:05:53.184 --rc geninfo_unexecuted_blocks=1 00:05:53.184 00:05:53.184 ' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:05:53.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.184 --rc genhtml_branch_coverage=1 00:05:53.184 --rc genhtml_function_coverage=1 00:05:53.184 --rc genhtml_legend=1 00:05:53.184 --rc geninfo_all_blocks=1 00:05:53.184 --rc geninfo_unexecuted_blocks=1 00:05:53.184 00:05:53.184 ' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:05:53.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.184 --rc genhtml_branch_coverage=1 00:05:53.184 --rc genhtml_function_coverage=1 00:05:53.184 --rc genhtml_legend=1 00:05:53.184 --rc geninfo_all_blocks=1 00:05:53.184 --rc geninfo_unexecuted_blocks=1 00:05:53.184 00:05:53.184 ' 00:05:53.184 15:10:35 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:53.184 15:10:35 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:53.184 15:10:35 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:53.184 15:10:35 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.184 15:10:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 ************************************ 00:05:53.184 START TEST default_locks 00:05:53.184 ************************************ 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59665 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59665 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59665 ']' 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 15:10:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.443 [2024-10-25 15:10:36.021346] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:05:53.443 [2024-10-25 15:10:36.021498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:05:53.701 [2024-10-25 15:10:36.213068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.701 [2024-10-25 15:10:36.332951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.661 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.661 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:54.661 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59665 00:05:54.661 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59665 00:05:54.661 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59665 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59665 ']' 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59665 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59665 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59665' 00:05:55.229 killing process with pid 59665 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59665 00:05:55.229 15:10:37 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59665 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59665 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59665 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59665 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59665 ']' 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.813 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.813 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59665) - No such process 00:05:57.813 ERROR: process (pid: 59665) is no longer running 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.813 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.814 15:10:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.814 00:05:57.814 real 0m4.361s 00:05:57.814 user 0m4.305s 00:05:57.814 sys 0m0.768s 00:05:57.814 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.814 15:10:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.814 ************************************ 00:05:57.814 END TEST default_locks 00:05:57.814 ************************************ 00:05:57.814 15:10:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.814 15:10:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:57.814 15:10:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.814 15:10:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.814 ************************************ 00:05:57.814 START TEST default_locks_via_rpc 00:05:57.814 ************************************ 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59747 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59747 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59747 ']' 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
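That closes default_locks: the target was killed first, so NOT waitforlisten 59665 has to see the "No such process" failure above and turn it into a pass (es=1, then (( !es == 0 ))). Stripped of argument validation and the signal-status handling at (( es > 128 )), the wrapper traced from autotest_common.sh reduces to:

  # Succeed only if the wrapped command fails; minimal sketch of the
  # NOT helper exercised above.
  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
  }

  # NOT waitforlisten 59665 && echo "pid 59665 is gone, as expected"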
00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.814 15:10:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.814 [2024-10-25 15:10:40.445257] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:05:57.814 [2024-10-25 15:10:40.445906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59747 ] 00:05:58.072 [2024-10-25 15:10:40.632268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.072 [2024-10-25 15:10:40.749234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59747 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59747 00:05:59.005 15:10:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59747 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59747 ']' 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59747 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59747 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:59.570 killing process with pid 59747 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59747' 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59747 00:05:59.570 15:10:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59747 00:06:02.142 00:06:02.142 real 0m4.347s 00:06:02.142 user 0m4.327s 00:06:02.142 sys 0m0.737s 00:06:02.142 15:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.142 15:10:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.142 ************************************ 00:06:02.142 END TEST default_locks_via_rpc 00:06:02.142 ************************************ 00:06:02.142 15:10:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:02.142 15:10:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.142 15:10:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.142 15:10:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.142 ************************************ 00:06:02.142 START TEST non_locking_app_on_locked_coremask 00:06:02.142 ************************************ 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59821 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59821 /var/tmp/spdk.sock 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59821 ']' 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.142 15:10:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.400 [2024-10-25 15:10:44.882362] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:02.400 [2024-10-25 15:10:44.882513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59821 ] 00:06:02.400 [2024-10-25 15:10:45.071775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.659 [2024-10-25 15:10:45.188472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.594 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59844 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59844 /var/tmp/spdk2.sock 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59844 ']' 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.595 15:10:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.595 [2024-10-25 15:10:46.174747] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:03.595 [2024-10-25 15:10:46.175267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59844 ] 00:06:03.854 [2024-10-25 15:10:46.359849] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
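Each test in this suite brackets its target with the same probe: locks_exist at event/cpu_locks.sh@22, which asks lslocks whether the pid still holds a file lock whose path names spdk_cpu_lock. Written out as a standalone sketch of that two-command pipeline:

  # Does pid $1 hold an SPDK per-core lock file?  Mirrors the
  # lslocks | grep -q pipeline traced at event/cpu_locks.sh@22.
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # locks_exist 59821 && echo "core locks held by 59821"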
00:06:03.854 [2024-10-25 15:10:46.359920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.114 [2024-10-25 15:10:46.614042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.647 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.647 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:06.647 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59821 00:06:06.647 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59821 00:06:06.647 15:10:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59821 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59821 ']' 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59821 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59821 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59821' 00:06:07.212 killing process with pid 59821 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59821 00:06:07.212 15:10:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59821 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59844 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59844 ']' 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59844 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59844 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.479 killing process with pid 59844 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59844' 00:06:12.479 15:10:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59844 00:06:12.479 15:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59844 00:06:14.383 00:06:14.384 real 0m12.153s 00:06:14.384 user 0m12.496s 00:06:14.384 sys 0m1.473s 00:06:14.384 15:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.384 15:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.384 ************************************ 00:06:14.384 END TEST non_locking_app_on_locked_coremask 00:06:14.384 ************************************ 00:06:14.384 15:10:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:14.384 15:10:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.384 15:10:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.384 15:10:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:14.384 ************************************ 00:06:14.384 START TEST locking_app_on_unlocked_coremask 00:06:14.384 ************************************ 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59994 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59994 /var/tmp/spdk.sock 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59994 ']' 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.384 15:10:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:14.384 [2024-10-25 15:10:57.088946] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:14.384 [2024-10-25 15:10:57.089075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59994 ] 00:06:14.642 [2024-10-25 15:10:57.259481] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
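locking_app_on_unlocked_coremask inverts the previous case: the first target takes core 0 with locking disabled (the "CPU core locks deactivated" notice above), leaving a second, lock-taking target free to claim the same core. Judging by the lock-file names this suite greps for, the claim itself is an advisory lock on /var/tmp/spdk_cpu_lock_<core>; a bash analogue for illustration only, since SPDK takes the lock in C (the claim_cpu_cores errors later in this log come from app.c):

  # Hold "core 0" the way the lock files suggest: an exclusive,
  # non-blocking flock on /var/tmp/spdk_cpu_lock_000.
  (
    exec 9>/var/tmp/spdk_cpu_lock_000
    flock -n 9 || { echo "core 0 already claimed" >&2; exit 1; }
    echo "claimed core 0 (pid $$)"
    sleep 30  # keep the fd, and therefore the lock, open while "running"
  ) &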
00:06:14.642 [2024-10-25 15:10:57.259538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.642 [2024-10-25 15:10:57.367191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60010 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60010 /var/tmp/spdk2.sock 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60010 ']' 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.584 15:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.843 [2024-10-25 15:10:58.321043] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:15.843 [2024-10-25 15:10:58.321714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:06:15.843 [2024-10-25 15:10:58.526276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.101 [2024-10-25 15:10:58.766760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.631 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.632 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:18.632 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60010 00:06:18.632 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60010 00:06:18.632 15:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59994 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59994 ']' 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59994 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.032 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59994 00:06:19.290 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.290 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.290 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59994' 00:06:19.290 killing process with pid 59994 00:06:19.290 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59994 00:06:19.290 15:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59994 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60010 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60010 ']' 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60010 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60010 00:06:24.554 killing process with pid 60010 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.554 15:11:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60010' 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60010 00:06:24.554 15:11:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60010 00:06:26.454 00:06:26.454 real 0m11.940s 00:06:26.454 user 0m12.281s 00:06:26.454 sys 0m1.434s 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.454 ************************************ 00:06:26.454 END TEST locking_app_on_unlocked_coremask 00:06:26.454 ************************************ 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 15:11:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:26.454 15:11:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.454 15:11:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.454 15:11:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 ************************************ 00:06:26.454 START TEST locking_app_on_locked_coremask 00:06:26.454 ************************************ 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:26.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60171 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60171 /var/tmp/spdk.sock 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60171 ']' 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.454 15:11:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.454 [2024-10-25 15:11:09.099967] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:26.454 [2024-10-25 15:11:09.100093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60171 ] 00:06:26.712 [2024-10-25 15:11:09.283069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.712 [2024-10-25 15:11:09.396631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60187 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60187 /var/tmp/spdk2.sock 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60187 /var/tmp/spdk2.sock 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60187 /var/tmp/spdk2.sock 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60187 ']' 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.649 15:11:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.909 [2024-10-25 15:11:10.400404] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:27.909 [2024-10-25 15:11:10.400527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60187 ] 00:06:27.909 [2024-10-25 15:11:10.585162] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60171 has claimed it. 00:06:27.909 [2024-10-25 15:11:10.585236] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.477 ERROR: process (pid: 60187) is no longer running 00:06:28.477 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60187) - No such process 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:28.477 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60171 00:06:28.478 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.478 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60171 ']' 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60171' 00:06:29.045 killing process with pid 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60171 00:06:29.045 15:11:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60171 00:06:31.583 00:06:31.583 real 0m4.932s 00:06:31.583 user 0m5.119s 00:06:31.583 sys 0m0.825s 00:06:31.583 15:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.583 ************************************ 00:06:31.583 END 
TEST locking_app_on_locked_coremask 00:06:31.583 ************************************ 00:06:31.583 15:11:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.583 15:11:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.583 15:11:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:31.583 15:11:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:31.583 15:11:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.583 ************************************ 00:06:31.583 START TEST locking_overlapped_coremask 00:06:31.583 ************************************ 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60257 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60257 /var/tmp/spdk.sock 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60257 ']' 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.583 15:11:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.583 [2024-10-25 15:11:14.117497] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:31.583 [2024-10-25 15:11:14.119352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60257 ] 00:06:31.583 [2024-10-25 15:11:14.308497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.870 [2024-10-25 15:11:14.432856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.870 [2024-10-25 15:11:14.432909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.870 [2024-10-25 15:11:14.432925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60279 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60279 /var/tmp/spdk2.sock 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60279 /var/tmp/spdk2.sock 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60279 /var/tmp/spdk2.sock 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60279 ']' 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.808 15:11:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.808 [2024-10-25 15:11:15.442489] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:06:32.808 [2024-10-25 15:11:15.442843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60279 ] 00:06:33.067 [2024-10-25 15:11:15.635478] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60257 has claimed it. 00:06:33.067 [2024-10-25 15:11:15.635591] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:33.326 ERROR: process (pid: 60279) is no longer running 00:06:33.326 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60279) - No such process 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60257 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60257 ']' 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60257 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:33.326 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:33.585 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60257 00:06:33.585 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:33.585 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:33.585 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60257' 00:06:33.585 killing process with pid 60257 00:06:33.585 15:11:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60257 00:06:33.585 15:11:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60257 00:06:36.123 00:06:36.123 real 0m4.541s 00:06:36.123 user 0m12.161s 00:06:36.123 sys 0m0.681s 00:06:36.123 ************************************ 00:06:36.123 END TEST locking_overlapped_coremask 00:06:36.123 ************************************ 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.123 15:11:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:36.123 15:11:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.123 15:11:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.123 15:11:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.123 ************************************ 00:06:36.123 START TEST locking_overlapped_coremask_via_rpc 00:06:36.123 ************************************ 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60344 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60344 /var/tmp/spdk.sock 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60344 ']' 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.123 15:11:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.123 [2024-10-25 15:11:18.731625] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:36.124 [2024-10-25 15:11:18.731781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60344 ] 00:06:36.382 [2024-10-25 15:11:18.920685] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
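locking_overlapped_coremask above ended with check_remaining_locks, which is a pure filename comparison: glob whatever lock files exist and demand they match the brace expansion for cores 0-2, the set a -m 0x7 target should own. Lifted almost verbatim from the trace:

  # Exactly the lock files for cores 0-2 (mask 0x7) should remain.
  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]]
  }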
00:06:36.382 [2024-10-25 15:11:18.920965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.382 [2024-10-25 15:11:19.044648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.382 [2024-10-25 15:11:19.044760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.382 [2024-10-25 15:11:19.044798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60362 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60362 /var/tmp/spdk2.sock 00:06:37.345 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60362 ']' 00:06:37.346 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.346 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.346 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.346 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.346 15:11:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.346 [2024-10-25 15:11:20.027265] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:37.346 [2024-10-25 15:11:20.027631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60362 ] 00:06:37.603 [2024-10-25 15:11:20.214500] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
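The via_rpc variant now starting mirrors that layout but launches both targets with --disable-cpumask-locks (hence the two "deactivated" notices): -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the two reactor sets share exactly one core, which is why the lock re-enable below can only ever fail on core 2:

  # The contested core is just the AND of the two masks:
  printf 'overlap: %#x\n' $(( 0x7 & 0x1c ))  # -> 0x4, i.e. core 2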
00:06:37.603 [2024-10-25 15:11:20.214575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.862 [2024-10-25 15:11:20.518440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.862 [2024-10-25 15:11:20.522361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.862 [2024-10-25 15:11:20.522405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 [2024-10-25 15:11:22.733500] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60344 has claimed it. 
00:06:40.399 request: 00:06:40.399 { 00:06:40.399 "method": "framework_enable_cpumask_locks", 00:06:40.399 "req_id": 1 00:06:40.399 } 00:06:40.399 Got JSON-RPC error response 00:06:40.399 response: 00:06:40.399 { 00:06:40.399 "code": -32603, 00:06:40.399 "message": "Failed to claim CPU core: 2" 00:06:40.399 } 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60344 /var/tmp/spdk.sock 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60344 ']' 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60362 /var/tmp/spdk2.sock 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60362 ']' 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
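With the RPC rejected, both targets stay up and only pid 60344 holds lock files. The check_remaining_locks step traced below reduces to this comparison (the glob and brace expansion are quoted from cpu_locks.sh; only the comments are added):

  # lock files on disk must match exactly the cores held by pid 60344
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0, 1, 2
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]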
00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.399 15:11:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.659 00:06:40.659 real 0m4.576s 00:06:40.659 user 0m1.377s 00:06:40.659 sys 0m0.253s 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.659 15:11:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.659 ************************************ 00:06:40.659 END TEST locking_overlapped_coremask_via_rpc 00:06:40.659 ************************************ 00:06:40.659 15:11:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.659 15:11:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60344 ]] 00:06:40.659 15:11:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60344 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60344 ']' 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60344 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60344 00:06:40.659 killing process with pid 60344 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60344' 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60344 00:06:40.659 15:11:23 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60344 00:06:43.271 15:11:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60362 ]] 00:06:43.271 15:11:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60362 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60362 ']' 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60362 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.271 
15:11:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60362 00:06:43.271 killing process with pid 60362 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60362' 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60362 00:06:43.271 15:11:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60362 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60344 ]] 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60344 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60344 ']' 00:06:45.806 Process with pid 60344 is not found 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60344 00:06:45.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60344) - No such process 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60344 is not found' 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60362 ]] 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60362 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60362 ']' 00:06:45.806 Process with pid 60362 is not found 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60362 00:06:45.806 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60362) - No such process 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60362 is not found' 00:06:45.806 15:11:28 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:45.806 00:06:45.806 real 0m52.784s 00:06:45.806 user 1m29.305s 00:06:45.806 sys 0m7.668s 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.806 ************************************ 00:06:45.806 END TEST cpu_locks 00:06:45.806 ************************************ 00:06:45.806 15:11:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.806 ************************************ 00:06:45.806 END TEST event 00:06:45.806 ************************************ 00:06:45.806 00:06:45.806 real 1m25.694s 00:06:45.806 user 2m34.737s 00:06:45.806 sys 0m12.427s 00:06:45.806 15:11:28 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.806 15:11:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.065 15:11:28 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.065 15:11:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.065 15:11:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.065 15:11:28 -- common/autotest_common.sh@10 -- # set +x 00:06:46.065 ************************************ 00:06:46.065 START TEST thread 00:06:46.065 ************************************ 00:06:46.065 15:11:28 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.065 * Looking for test storage... 
00:06:46.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:46.065 15:11:28 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:46.065 15:11:28 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:06:46.065 15:11:28 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:46.065 15:11:28 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:46.065 15:11:28 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.065 15:11:28 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.065 15:11:28 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.065 15:11:28 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.065 15:11:28 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.065 15:11:28 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.065 15:11:28 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.065 15:11:28 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.065 15:11:28 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.065 15:11:28 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.065 15:11:28 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.065 15:11:28 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:46.065 15:11:28 thread -- scripts/common.sh@345 -- # : 1 00:06:46.065 15:11:28 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.065 15:11:28 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.065 15:11:28 thread -- scripts/common.sh@365 -- # decimal 1 00:06:46.065 15:11:28 thread -- scripts/common.sh@353 -- # local d=1 00:06:46.066 15:11:28 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.066 15:11:28 thread -- scripts/common.sh@355 -- # echo 1 00:06:46.066 15:11:28 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.325 15:11:28 thread -- scripts/common.sh@366 -- # decimal 2 00:06:46.325 15:11:28 thread -- scripts/common.sh@353 -- # local d=2 00:06:46.325 15:11:28 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.325 15:11:28 thread -- scripts/common.sh@355 -- # echo 2 00:06:46.325 15:11:28 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.325 15:11:28 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.325 15:11:28 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.325 15:11:28 thread -- scripts/common.sh@368 -- # return 0 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:46.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.325 --rc genhtml_branch_coverage=1 00:06:46.325 --rc genhtml_function_coverage=1 00:06:46.325 --rc genhtml_legend=1 00:06:46.325 --rc geninfo_all_blocks=1 00:06:46.325 --rc geninfo_unexecuted_blocks=1 00:06:46.325 00:06:46.325 ' 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:46.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.325 --rc genhtml_branch_coverage=1 00:06:46.325 --rc genhtml_function_coverage=1 00:06:46.325 --rc genhtml_legend=1 00:06:46.325 --rc geninfo_all_blocks=1 00:06:46.325 --rc geninfo_unexecuted_blocks=1 00:06:46.325 00:06:46.325 ' 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:46.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:46.325 --rc genhtml_branch_coverage=1 00:06:46.325 --rc genhtml_function_coverage=1 00:06:46.325 --rc genhtml_legend=1 00:06:46.325 --rc geninfo_all_blocks=1 00:06:46.325 --rc geninfo_unexecuted_blocks=1 00:06:46.325 00:06:46.325 ' 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:46.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.325 --rc genhtml_branch_coverage=1 00:06:46.325 --rc genhtml_function_coverage=1 00:06:46.325 --rc genhtml_legend=1 00:06:46.325 --rc geninfo_all_blocks=1 00:06:46.325 --rc geninfo_unexecuted_blocks=1 00:06:46.325 00:06:46.325 ' 00:06:46.325 15:11:28 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.325 15:11:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.325 ************************************ 00:06:46.325 START TEST thread_poller_perf 00:06:46.325 ************************************ 00:06:46.325 15:11:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.325 [2024-10-25 15:11:28.865222] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:46.325 [2024-10-25 15:11:28.865517] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60563 ] 00:06:46.325 [2024-10-25 15:11:29.045899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.585 [2024-10-25 15:11:29.184942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.585 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:47.962 [2024-10-25T15:11:30.690Z] ====================================== 00:06:47.962 [2024-10-25T15:11:30.690Z] busy:2500233060 (cyc) 00:06:47.962 [2024-10-25T15:11:30.690Z] total_run_count: 385000 00:06:47.962 [2024-10-25T15:11:30.690Z] tsc_hz: 2490000000 (cyc) 00:06:47.962 [2024-10-25T15:11:30.690Z] ====================================== 00:06:47.962 [2024-10-25T15:11:30.690Z] poller_cost: 6494 (cyc), 2608 (nsec) 00:06:47.962 00:06:47.962 ************************************ 00:06:47.962 END TEST thread_poller_perf 00:06:47.962 ************************************ 00:06:47.962 real 0m1.605s 00:06:47.962 user 0m1.369s 00:06:47.962 sys 0m0.126s 00:06:47.962 15:11:30 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.962 15:11:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.962 15:11:30 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.962 15:11:30 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:47.962 15:11:30 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.962 15:11:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.962 ************************************ 00:06:47.962 START TEST thread_poller_perf 00:06:47.962 ************************************ 00:06:47.962 15:11:30 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:47.962 [2024-10-25 15:11:30.549886] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:06:47.962 [2024-10-25 15:11:30.550006] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60605 ] 00:06:48.228 [2024-10-25 15:11:30.733318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.228 Running 1000 pollers for 1 seconds with 0 microseconds period. 
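poller_cost in the table above is plain arithmetic on the reported counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Worked out for the -b 1000 -l 1 -t 1 run (the zero-period run below follows the same formula):

  # 2500233060 busy cyc / 385000 runs          ~= 6494 cyc per poller invocation
  # 6494 cyc / 2.49 cyc-per-ns (tsc_hz 2.49e9) ~= 2608 nsec, as reported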
00:06:48.228 [2024-10-25 15:11:30.849820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.705 [2024-10-25T15:11:32.433Z] ====================================== 00:06:49.705 [2024-10-25T15:11:32.433Z] busy:2494098790 (cyc) 00:06:49.705 [2024-10-25T15:11:32.433Z] total_run_count: 5071000 00:06:49.705 [2024-10-25T15:11:32.433Z] tsc_hz: 2490000000 (cyc) 00:06:49.705 [2024-10-25T15:11:32.433Z] ====================================== 00:06:49.705 [2024-10-25T15:11:32.433Z] poller_cost: 491 (cyc), 197 (nsec) 00:06:49.705 00:06:49.705 real 0m1.582s 00:06:49.705 user 0m1.363s 00:06:49.705 sys 0m0.110s 00:06:49.705 15:11:32 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.705 ************************************ 00:06:49.705 END TEST thread_poller_perf 00:06:49.705 ************************************ 00:06:49.705 15:11:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 15:11:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.705 00:06:49.705 real 0m3.578s 00:06:49.705 user 0m2.907s 00:06:49.705 sys 0m0.462s 00:06:49.705 15:11:32 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.705 15:11:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 ************************************ 00:06:49.705 END TEST thread 00:06:49.705 ************************************ 00:06:49.705 15:11:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:49.705 15:11:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.705 15:11:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.705 15:11:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.705 15:11:32 -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 ************************************ 00:06:49.705 START TEST app_cmdline 00:06:49.705 ************************************ 00:06:49.705 15:11:32 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:49.705 * Looking for test storage... 
00:06:49.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:49.705 15:11:32 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:49.705 15:11:32 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:49.705 15:11:32 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.978 15:11:32 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:49.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.978 --rc genhtml_branch_coverage=1 00:06:49.978 --rc genhtml_function_coverage=1 00:06:49.978 --rc genhtml_legend=1 00:06:49.978 --rc geninfo_all_blocks=1 00:06:49.978 --rc geninfo_unexecuted_blocks=1 00:06:49.978 00:06:49.978 ' 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:49.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.978 --rc genhtml_branch_coverage=1 00:06:49.978 --rc genhtml_function_coverage=1 00:06:49.978 --rc genhtml_legend=1 00:06:49.978 --rc geninfo_all_blocks=1 00:06:49.978 --rc geninfo_unexecuted_blocks=1 00:06:49.978 
00:06:49.978 ' 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:49.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.978 --rc genhtml_branch_coverage=1 00:06:49.978 --rc genhtml_function_coverage=1 00:06:49.978 --rc genhtml_legend=1 00:06:49.978 --rc geninfo_all_blocks=1 00:06:49.978 --rc geninfo_unexecuted_blocks=1 00:06:49.978 00:06:49.978 ' 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:49.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.978 --rc genhtml_branch_coverage=1 00:06:49.978 --rc genhtml_function_coverage=1 00:06:49.978 --rc genhtml_legend=1 00:06:49.978 --rc geninfo_all_blocks=1 00:06:49.978 --rc geninfo_unexecuted_blocks=1 00:06:49.978 00:06:49.978 ' 00:06:49.978 15:11:32 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:49.978 15:11:32 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60694 00:06:49.978 15:11:32 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60694 00:06:49.978 15:11:32 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60694 ']' 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.978 15:11:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.978 [2024-10-25 15:11:32.570420] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
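Note the target for this test is started with --rpcs-allowed spdk_get_version,rpc_get_methods (launch line above), so exactly those two methods are callable. The checks the script performs below, condensed (both rpc.py invocations appear in the trace, directly or via the rpc_cmd wrapper):

  scripts/rpc.py spdk_get_version          # allowed: returns the version JSON
  scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two methods
  scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"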
00:06:49.978 [2024-10-25 15:11:32.570748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60694 ] 00:06:50.236 [2024-10-25 15:11:32.754630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.236 [2024-10-25 15:11:32.876023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.171 15:11:33 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.171 15:11:33 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:51.171 15:11:33 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:51.430 { 00:06:51.430 "version": "SPDK v25.01-pre git sha1 183001ebc", 00:06:51.430 "fields": { 00:06:51.430 "major": 25, 00:06:51.430 "minor": 1, 00:06:51.430 "patch": 0, 00:06:51.430 "suffix": "-pre", 00:06:51.430 "commit": "183001ebc" 00:06:51.430 } 00:06:51.430 } 00:06:51.430 15:11:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.430 15:11:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.430 15:11:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.430 15:11:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.430 15:11:34 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.430 15:11:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:51.430 15:11:34 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.430 15:11:34 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.430 15:11:34 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:51.431 15:11:34 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.690 request: 00:06:51.690 { 00:06:51.690 "method": "env_dpdk_get_mem_stats", 00:06:51.690 "req_id": 1 00:06:51.690 } 00:06:51.690 Got JSON-RPC error response 00:06:51.690 response: 00:06:51.690 { 00:06:51.690 "code": -32601, 00:06:51.690 "message": "Method not found" 00:06:51.690 } 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.690 15:11:34 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60694 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60694 ']' 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60694 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60694 00:06:51.690 killing process with pid 60694 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60694' 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@969 -- # kill 60694 00:06:51.690 15:11:34 app_cmdline -- common/autotest_common.sh@974 -- # wait 60694 00:06:54.242 00:06:54.242 real 0m4.500s 00:06:54.242 user 0m4.675s 00:06:54.242 sys 0m0.681s 00:06:54.242 15:11:36 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.242 ************************************ 00:06:54.242 END TEST app_cmdline 00:06:54.242 ************************************ 00:06:54.242 15:11:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.242 15:11:36 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.242 15:11:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.242 15:11:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.242 15:11:36 -- common/autotest_common.sh@10 -- # set +x 00:06:54.242 ************************************ 00:06:54.242 START TEST version 00:06:54.242 ************************************ 00:06:54.242 15:11:36 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:54.242 * Looking for test storage... 
00:06:54.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.242 15:11:36 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:54.242 15:11:36 version -- common/autotest_common.sh@1689 -- # lcov --version 00:06:54.242 15:11:36 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:54.502 15:11:36 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:54.502 15:11:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.502 15:11:36 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.502 15:11:36 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.502 15:11:36 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.502 15:11:36 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.502 15:11:36 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.502 15:11:36 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.502 15:11:36 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.502 15:11:36 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.502 15:11:36 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.502 15:11:36 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.502 15:11:36 version -- scripts/common.sh@344 -- # case "$op" in 00:06:54.502 15:11:36 version -- scripts/common.sh@345 -- # : 1 00:06:54.502 15:11:36 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.502 15:11:36 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.502 15:11:36 version -- scripts/common.sh@365 -- # decimal 1 00:06:54.502 15:11:36 version -- scripts/common.sh@353 -- # local d=1 00:06:54.502 15:11:36 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.502 15:11:36 version -- scripts/common.sh@355 -- # echo 1 00:06:54.502 15:11:36 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.502 15:11:36 version -- scripts/common.sh@366 -- # decimal 2 00:06:54.502 15:11:37 version -- scripts/common.sh@353 -- # local d=2 00:06:54.502 15:11:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.502 15:11:37 version -- scripts/common.sh@355 -- # echo 2 00:06:54.502 15:11:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.502 15:11:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.502 15:11:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.502 15:11:37 version -- scripts/common.sh@368 -- # return 0 00:06:54.502 15:11:37 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.502 15:11:37 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:54.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.502 --rc genhtml_branch_coverage=1 00:06:54.502 --rc genhtml_function_coverage=1 00:06:54.502 --rc genhtml_legend=1 00:06:54.502 --rc geninfo_all_blocks=1 00:06:54.502 --rc geninfo_unexecuted_blocks=1 00:06:54.502 00:06:54.502 ' 00:06:54.502 15:11:37 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:54.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.502 --rc genhtml_branch_coverage=1 00:06:54.502 --rc genhtml_function_coverage=1 00:06:54.502 --rc genhtml_legend=1 00:06:54.502 --rc geninfo_all_blocks=1 00:06:54.502 --rc geninfo_unexecuted_blocks=1 00:06:54.502 00:06:54.502 ' 00:06:54.502 15:11:37 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:54.502 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:54.502 --rc genhtml_branch_coverage=1 00:06:54.502 --rc genhtml_function_coverage=1 00:06:54.502 --rc genhtml_legend=1 00:06:54.502 --rc geninfo_all_blocks=1 00:06:54.502 --rc geninfo_unexecuted_blocks=1 00:06:54.502 00:06:54.502 ' 00:06:54.502 15:11:37 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:54.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.502 --rc genhtml_branch_coverage=1 00:06:54.502 --rc genhtml_function_coverage=1 00:06:54.502 --rc genhtml_legend=1 00:06:54.502 --rc geninfo_all_blocks=1 00:06:54.502 --rc geninfo_unexecuted_blocks=1 00:06:54.502 00:06:54.502 ' 00:06:54.502 15:11:37 version -- app/version.sh@17 -- # get_header_version major 00:06:54.502 15:11:37 version -- app/version.sh@14 -- # cut -f2 00:06:54.502 15:11:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.502 15:11:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.502 15:11:37 version -- app/version.sh@17 -- # major=25 00:06:54.502 15:11:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:54.502 15:11:37 version -- app/version.sh@14 -- # cut -f2 00:06:54.502 15:11:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.502 15:11:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.502 15:11:37 version -- app/version.sh@18 -- # minor=1 00:06:54.502 15:11:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:54.503 15:11:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.503 15:11:37 version -- app/version.sh@14 -- # cut -f2 00:06:54.503 15:11:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.503 15:11:37 version -- app/version.sh@19 -- # patch=0 00:06:54.503 15:11:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:54.503 15:11:37 version -- app/version.sh@14 -- # cut -f2 00:06:54.503 15:11:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:54.503 15:11:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:54.503 15:11:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:54.503 15:11:37 version -- app/version.sh@22 -- # version=25.1 00:06:54.503 15:11:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:54.503 15:11:37 version -- app/version.sh@28 -- # version=25.1rc0 00:06:54.503 15:11:37 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:54.503 15:11:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:54.503 15:11:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:54.503 15:11:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:54.503 ************************************ 00:06:54.503 END TEST version 00:06:54.503 ************************************ 00:06:54.503 00:06:54.503 real 0m0.322s 00:06:54.503 user 0m0.178s 00:06:54.503 sys 0m0.201s 00:06:54.503 15:11:37 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.503 15:11:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:54.503 15:11:37 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:54.503 15:11:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:54.503 15:11:37 -- spdk/autotest.sh@194 -- # uname -s 00:06:54.503 15:11:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:54.503 15:11:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:54.503 15:11:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:54.503 15:11:37 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:54.503 15:11:37 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:54.503 15:11:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:54.503 15:11:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.503 15:11:37 -- common/autotest_common.sh@10 -- # set +x 00:06:54.503 ************************************ 00:06:54.503 START TEST blockdev_nvme 00:06:54.503 ************************************ 00:06:54.503 15:11:37 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:54.763 * Looking for test storage... 00:06:54.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.763 15:11:37 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:54.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.763 --rc genhtml_branch_coverage=1 00:06:54.763 --rc genhtml_function_coverage=1 00:06:54.763 --rc genhtml_legend=1 00:06:54.763 --rc geninfo_all_blocks=1 00:06:54.763 --rc geninfo_unexecuted_blocks=1 00:06:54.763 00:06:54.763 ' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:54.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.763 --rc genhtml_branch_coverage=1 00:06:54.763 --rc genhtml_function_coverage=1 00:06:54.763 --rc genhtml_legend=1 00:06:54.763 --rc geninfo_all_blocks=1 00:06:54.763 --rc geninfo_unexecuted_blocks=1 00:06:54.763 00:06:54.763 ' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:54.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.763 --rc genhtml_branch_coverage=1 00:06:54.763 --rc genhtml_function_coverage=1 00:06:54.763 --rc genhtml_legend=1 00:06:54.763 --rc geninfo_all_blocks=1 00:06:54.763 --rc geninfo_unexecuted_blocks=1 00:06:54.763 00:06:54.763 ' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:54.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.763 --rc genhtml_branch_coverage=1 00:06:54.763 --rc genhtml_function_coverage=1 00:06:54.763 --rc genhtml_legend=1 00:06:54.763 --rc geninfo_all_blocks=1 00:06:54.763 --rc geninfo_unexecuted_blocks=1 00:06:54.763 00:06:54.763 ' 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:54.763 15:11:37 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60877 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:54.763 15:11:37 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60877 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60877 ']' 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.763 15:11:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.023 [2024-10-25 15:11:37.538284] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
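setup_nvme_conf below feeds the output of scripts/gen_nvme.sh into load_subsystem_config, attaching one PCIe controller per emulated device before the framework starts. A hedged equivalent using live RPCs instead of the generated JSON (controller names and PCI addresses are taken from the config in the trace; the per-call form is an illustration, not what the test runs):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0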
00:06:55.023 [2024-10-25 15:11:37.538589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60877 ] 00:06:55.023 [2024-10-25 15:11:37.719105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.282 [2024-10-25 15:11:37.836532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.219 15:11:38 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.219 15:11:38 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:56.219 15:11:38 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:56.219 15:11:38 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.219 15:11:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.479 15:11:39 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.479 15:11:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:56.479 15:11:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.479 15:11:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.479 15:11:39 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:56.479 15:11:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.480 15:11:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.480 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.480 15:11:39 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:56.480 15:11:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:56.480 15:11:39 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:56.480 15:11:39 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.480 15:11:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.740 15:11:39 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.740 15:11:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7ec7ff38-80d3-4071-8866-c28200ebded0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7ec7ff38-80d3-4071-8866-c28200ebded0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "dff5246c-dcd0-458e-a6da-a0f4e078c199"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "dff5246c-dcd0-458e-a6da-a0f4e078c199",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "df1bd46a-5dbe-4a40-a233-d40b6c15df9e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "df1bd46a-5dbe-4a40-a233-d40b6c15df9e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ddac3da1-a8ab-4289-aa52-0ec282447375"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddac3da1-a8ab-4289-aa52-0ec282447375",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "66583bf2-8cbb-4885-9adf-677ba4e4c761"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "66583bf2-8cbb-4885-9adf-677ba4e4c761",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "59f86114-ca73-42f8-bded-ac96a26e7607"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "59f86114-ca73-42f8-bded-ac96a26e7607",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:56.741 15:11:39 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60877 00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60877 ']' 00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60877 00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:56.741 15:11:39 blockdev_nvme -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60877
killing process with pid 60877
15:11:39 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60877'
00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60877
00:06:56.741 15:11:39 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60877
00:06:59.285 15:11:41 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:59.285 15:11:41 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:59.285 15:11:41 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:06:59.285 15:11:41 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:59.285 15:11:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:59.285 ************************************
00:06:59.285 START TEST bdev_hello_world
00:06:59.286 ************************************
00:06:59.286 15:11:41 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:06:59.286 [2024-10-25 15:11:41.817771] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization...
00:06:59.286 [2024-10-25 15:11:41.818165] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60977 ]
00:06:59.286 [2024-10-25 15:11:41.992716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.544 [2024-10-25 15:11:42.110849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.110 [2024-10-25 15:11:42.777833] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:07:00.110 [2024-10-25 15:11:42.777886] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:07:00.110 [2024-10-25 15:11:42.777917] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:07:00.110 [2024-10-25 15:11:42.781072] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:07:00.110 [2024-10-25 15:11:42.781817] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:07:00.110 [2024-10-25 15:11:42.781858] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:07:00.110 [2024-10-25 15:11:42.782106] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
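The bdev_hello_world test above comes down to a single run of SPDK's hello_bdev example against the bdev configuration generated earlier. A minimal sketch of reproducing that step by hand, using the same paths this job used (running via sudo is an assumption here, since the example needs hugepage access):

  # Open bdev Nvme0n1, write "Hello World!" to it, read it back, and print
  # the string; this corresponds to the hello_bdev NOTICE lines above.
  cd /home/vagrant/spdk_repo/spdk
  sudo ./build/examples/hello_bdev \
      --json test/bdev/bdev.json \
      -b Nvme0n1

Both flags match the xtrace above: --json supplies the bdev definitions loaded at startup, and -b names the block device the example opens.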
00:07:00.110
00:07:00.110 [2024-10-25 15:11:42.782133] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:07:01.525
00:07:01.525 real 0m2.160s
00:07:01.525 user 0m1.805s
00:07:01.525 sys 0m0.246s
00:07:01.525 ************************************
00:07:01.525 END TEST bdev_hello_world
00:07:01.525 ************************************
00:07:01.525 15:11:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.525 15:11:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:01.525 15:11:43 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:07:01.525 15:11:43 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:01.525 15:11:43 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.525 15:11:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:01.525 ************************************
00:07:01.525 START TEST bdev_bounds
00:07:01.525 ************************************
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds ''
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61020
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61020'
00:07:01.525 Process bdevio pid: 61020
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61020
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61020 ']'
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:01.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:01.525 15:11:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:01.525 [2024-10-25 15:11:44.081586] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization...
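The bdev_bounds test booting here is a two-process affair: the bdevio application is started in wait mode and the CUnit suites are then triggered over its RPC socket. A rough sketch of that flow (the sleep is a stand-in assumption for the real waitforlisten helper, which polls /var/tmp/spdk.sock until the app answers):

  # Start bdevio waiting for an RPC trigger (-w) instead of running at once,
  # then ask it to execute its per-bdev test suites.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  sleep 2    # assumption: waitforlisten instead polls the RPC socket
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill $bdevio_pid

The CUnit output below, one suite per bdev, is produced by that perform_tests call.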
00:07:01.525 [2024-10-25 15:11:44.081949] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61020 ]
00:07:01.783 [2024-10-25 15:11:44.271489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:01.783 [2024-10-25 15:11:44.389731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:01.783 [2024-10-25 15:11:44.389919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.783 [2024-10-25 15:11:44.389970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:02.719 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:02.719 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:07:02.719 15:11:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:02.719 I/O targets:
00:07:02.719 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:02.720 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:07:02.720 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:02.720 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:02.720 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:02.720 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:02.720
00:07:02.720
00:07:02.720 CUnit - A unit testing framework for C - Version 2.1-3
00:07:02.720 http://cunit.sourceforge.net/
00:07:02.720
00:07:02.720
00:07:02.720 Suite: bdevio tests on: Nvme3n1
00:07:02.720 Test: blockdev write read block ...passed
00:07:02.720 Test: blockdev write zeroes read block ...passed
00:07:02.720 Test: blockdev write zeroes read no split ...passed
00:07:02.720 Test: blockdev write zeroes read split ...passed
00:07:02.720 Test: blockdev write zeroes read split partial ...passed
00:07:02.720 Test: blockdev reset ...[2024-10-25 15:11:45.265487] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:02.720 [2024-10-25 15:11:45.270249] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller passed
00:07:02.720 Test: blockdev write read 8 blocks ...successful.
00:07:02.720 passed
00:07:02.720 Test: blockdev write read size > 128k ...passed
00:07:02.720 Test: blockdev write read invalid size ...passed
00:07:02.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:02.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:02.720 Test: blockdev write read max offset ...passed
00:07:02.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:02.720 Test: blockdev writev readv 8 blocks ...passed
00:07:02.720 Test: blockdev writev readv 30 x 1block ...passed
00:07:02.720 Test: blockdev writev readv block ...passed
00:07:02.720 Test: blockdev writev readv size > 128k ...passed
00:07:02.720 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:02.720 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.280823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbe0a000 len:0x1000
00:07:02.720 [2024-10-25 15:11:45.280901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:02.720 passed
00:07:02.720 Test: blockdev nvme passthru rw ...passed
00:07:02.720 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:11:45.281887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed
00:07:02.720 Test: blockdev nvme admin passthru ...RP2 0x0
00:07:02.720 [2024-10-25 15:11:45.282057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:02.720 passed
00:07:02.720 Test: blockdev copy ...passed
00:07:02.720 Suite: bdevio tests on: Nvme2n3
00:07:02.720 Test: blockdev write read block ...passed
00:07:02.720 Test: blockdev write zeroes read block ...passed
00:07:02.720 Test: blockdev write zeroes read no split ...passed
00:07:02.720 Test: blockdev write zeroes read split ...passed
00:07:02.720 Test: blockdev write zeroes read split partial ...passed
00:07:02.720 Test: blockdev reset ...[2024-10-25 15:11:45.364058] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:02.720 [2024-10-25 15:11:45.369310] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed
00:07:02.720 Test: blockdev write read 8 blocks ...successful.
00:07:02.720 passed
00:07:02.720 Test: blockdev write read size > 128k ...passed
00:07:02.720 Test: blockdev write read invalid size ...passed
00:07:02.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:02.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:02.720 Test: blockdev write read max offset ...passed
00:07:02.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:02.720 Test: blockdev writev readv 8 blocks ...passed
00:07:02.720 Test: blockdev writev readv 30 x 1block ...passed
00:07:02.720 Test: blockdev writev readv block ...passed
00:07:02.720 Test: blockdev writev readv size > 128k ...passed
00:07:02.720 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:02.720 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.380723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed
00:07:02.720 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x29f006000 len:0x1000
00:07:02.720 [2024-10-25 15:11:45.380930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:02.720 passed
00:07:02.720 Test: blockdev nvme passthru vendor specific ...passed
00:07:02.720 Test: blockdev nvme admin passthru ...[2024-10-25 15:11:45.381878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:02.720 [2024-10-25 15:11:45.381921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:02.720 passed
00:07:02.720 Test: blockdev copy ...passed
00:07:02.720 Suite: bdevio tests on: Nvme2n2
00:07:02.720 Test: blockdev write read block ...passed
00:07:02.720 Test: blockdev write zeroes read block ...passed
00:07:02.720 Test: blockdev write zeroes read no split ...passed
00:07:02.979 Test: blockdev write zeroes read split ...passed
00:07:02.979 Test: blockdev write zeroes read split partial ...passed
00:07:02.979 Test: blockdev reset ...[2024-10-25 15:11:45.466071] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:02.979 passed
00:07:02.979 Test: blockdev write read 8 blocks ...[2024-10-25 15:11:45.471199] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:02.979 passed
00:07:02.979 Test: blockdev write read size > 128k ...passed
00:07:02.979 Test: blockdev write read invalid size ...passed
00:07:02.979 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:02.979 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:02.979 Test: blockdev write read max offset ...passed
00:07:02.979 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:02.979 Test: blockdev writev readv 8 blocks ...passed
00:07:02.979 Test: blockdev writev readv 30 x 1block ...passed
00:07:02.979 Test: blockdev writev readv block ...passed
00:07:02.979 Test: blockdev writev readv size > 128k ...passed
00:07:02.979 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:02.980 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.481210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d763c000 len:0x1000
00:07:02.980 [2024-10-25 15:11:45.481285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev nvme passthru rw ...passed
00:07:02.980 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:11:45.482352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed
00:07:02.980 Test: blockdev nvme admin passthru ... cid:190 PRP1 0x0 PRP2 0x0
00:07:02.980 [2024-10-25 15:11:45.482524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev copy ...passed
00:07:02.980 Suite: bdevio tests on: Nvme2n1
00:07:02.980 Test: blockdev write read block ...passed
00:07:02.980 Test: blockdev write zeroes read block ...passed
00:07:02.980 Test: blockdev write zeroes read no split ...passed
00:07:02.980 Test: blockdev write zeroes read split ...passed
00:07:02.980 Test: blockdev write zeroes read split partial ...passed
00:07:02.980 Test: blockdev reset ...[2024-10-25 15:11:45.566766] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:02.980 [2024-10-25 15:11:45.571902] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:02.980 passed
00:07:02.980 Test: blockdev write read 8 blocks ...passed
00:07:02.980 Test: blockdev write read size > 128k ...passed
00:07:02.980 Test: blockdev write read invalid size ...passed
00:07:02.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:02.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:02.980 Test: blockdev write read max offset ...passed
00:07:02.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:02.980 Test: blockdev writev readv 8 blocks ...passed
00:07:02.980 Test: blockdev writev readv 30 x 1block ...passed
00:07:02.980 Test: blockdev writev readv block ...passed
00:07:02.980 Test: blockdev writev readv size > 128k ...passed
00:07:02.980 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:02.980 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.583559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7638000 len:0x1000
00:07:02.980 [2024-10-25 15:11:45.583806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev nvme passthru rw ...passed
00:07:02.980 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:11:45.585239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:02.980 [2024-10-25 15:11:45.585391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed
00:07:02.980 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev copy ...passed
00:07:02.980 Suite: bdevio tests on: Nvme1n1
00:07:02.980 Test: blockdev write read block ...passed
00:07:02.980 Test: blockdev write zeroes read block ...passed
00:07:02.980 Test: blockdev write zeroes read no split ...passed
00:07:02.980 Test: blockdev write zeroes read split ...passed
00:07:02.980 Test: blockdev write zeroes read split partial ...passed
00:07:02.980 Test: blockdev reset ...[2024-10-25 15:11:45.667334] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller
00:07:02.980 [2024-10-25 15:11:45.671767] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller passed
00:07:02.980 Test: blockdev write read 8 blocks ...successful.
00:07:02.980 passed
00:07:02.980 Test: blockdev write read size > 128k ...passed
00:07:02.980 Test: blockdev write read invalid size ...passed
00:07:02.980 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:02.980 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:02.980 Test: blockdev write read max offset ...passed
00:07:02.980 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:02.980 Test: blockdev writev readv 8 blocks ...passed
00:07:02.980 Test: blockdev writev readv 30 x 1block ...passed
00:07:02.980 Test: blockdev writev readv block ...passed
00:07:02.980 Test: blockdev writev readv size > 128k ...passed
00:07:02.980 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:02.980 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.681565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed
00:07:02.980 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2d7634000 len:0x1000
00:07:02.980 [2024-10-25 15:11:45.681739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev nvme passthru vendor specific ...passed
00:07:02.980 Test: blockdev nvme admin passthru ...[2024-10-25 15:11:45.682709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:02.980 [2024-10-25 15:11:45.682753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:02.980 passed
00:07:02.980 Test: blockdev copy ...passed
00:07:02.980 Suite: bdevio tests on: Nvme0n1
00:07:02.980 Test: blockdev write read block ...passed
00:07:02.980 Test: blockdev write zeroes read block ...passed
00:07:02.980 Test: blockdev write zeroes read no split ...passed
00:07:03.240 Test: blockdev write zeroes read split ...passed
00:07:03.240 Test: blockdev write zeroes read split partial ...passed
00:07:03.240 Test: blockdev reset ...[2024-10-25 15:11:45.767782] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:03.240 [2024-10-25 15:11:45.772577] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller passed
00:07:03.240 Test: blockdev write read 8 blocks ...successful.
00:07:03.240 passed
00:07:03.240 Test: blockdev write read size > 128k ...passed
00:07:03.240 Test: blockdev write read invalid size ...passed
00:07:03.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:03.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:03.240 Test: blockdev write read max offset ...passed
00:07:03.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:03.240 Test: blockdev writev readv 8 blocks ...passed
00:07:03.240 Test: blockdev writev readv 30 x 1block ...passed
00:07:03.240 Test: blockdev writev readv block ...passed
00:07:03.240 Test: blockdev writev readv size > 128k ...passed
00:07:03.240 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:03.240 Test: blockdev comparev and writev ...[2024-10-25 15:11:45.782679] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:07:03.240 separate metadata which is not supported yet.
00:07:03.240 passed
00:07:03.240 Test: blockdev nvme passthru rw ...passed
00:07:03.240 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:11:45.783570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:03.240 [2024-10-25 15:11:45.783811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:03.240 passed
00:07:03.240 Test: blockdev nvme admin passthru ...passed
00:07:03.240 Test: blockdev copy ...passed
00:07:03.240
00:07:03.240 Run Summary: Type Total Ran Passed Failed Inactive
00:07:03.240 suites 6 6 n/a 0 0
00:07:03.240 tests 138 138 138 0 0
00:07:03.240 asserts 893 893 893 0 n/a
00:07:03.240
00:07:03.240 Elapsed time = 1.620 seconds
00:07:03.240 0
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61020
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61020 ']'
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61020
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61020
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:03.240 killing process with pid 61020
15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61020'
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61020
00:07:03.240 15:11:45 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61020
00:07:04.177 15:11:46 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:04.177
00:07:04.177 real 0m2.913s
00:07:04.177 user 0m7.411s
00:07:04.177 sys 0m0.440s
00:07:04.177 15:11:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:04.177 15:11:46 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:04.177 ************************************
00:07:04.177 END TEST bdev_bounds
00:07:04.177 ************************************
00:07:04.436 15:11:46 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:04.436 15:11:46 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:04.436 15:11:46 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:04.436 15:11:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.436 ************************************
00:07:04.436 START TEST bdev_nbd
00:07:04.436 ************************************
00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:07:04.436 15:11:46
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61085 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61085 ']' 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.436 15:11:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:04.436 [2024-10-25 15:11:47.078580] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
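The nbd_function_test run starting here begins with a start/stop verification: each bdev is exported as a kernel /dev/nbdX device, the device is probed with a single direct-I/O read, and the export is torn down again. A condensed sketch of that per-device check (the output file path and the 0.1 s poll interval are illustrative assumptions; the 20-retry bound mirrors the waitfornbd loop visible in the xtrace below):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Export a bdev over NBD; the RPC prints the kernel device it claimed.
  dev=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1)    # e.g. /dev/nbd0
  # Wait, as waitfornbd does, until the device shows up in /proc/partitions.
  for i in $(seq 1 20); do
      grep -q -w "${dev#/dev/}" /proc/partitions && break
      sleep 0.1
  done
  # One 4 KiB O_DIRECT read proves the bdev-to-kernel-device path works.
  dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  "$rpc" -s "$sock" nbd_stop_disk "$dev"

The same pattern then repeats for Nvme1n1 through Nvme3n1 on /dev/nbd1 to /dev/nbd5.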
00:07:04.436 [2024-10-25 15:11:47.078957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.695 [2024-10-25 15:11:47.264784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.695 [2024-10-25 15:11:47.387668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.632 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.891 1+0 records in 
00:07:05.891 1+0 records out 00:07:05.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666131 s, 6.1 MB/s 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.891 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.150 1+0 records in 00:07:06.150 1+0 records out 00:07:06.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745614 s, 5.5 MB/s 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.150 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.151 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.151 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.151 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.409 1+0 records in 00:07:06.409 1+0 records out 00:07:06.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594753 s, 6.9 MB/s 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.409 15:11:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.669 1+0 records in 00:07:06.669 1+0 records out 00:07:06.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848331 s, 4.8 MB/s 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.669 15:11:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.669 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.928 1+0 records in 00:07:06.928 1+0 records out 00:07:06.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000842906 s, 4.9 MB/s 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:06.928 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.187 1+0 records in 00:07:07.187 1+0 records out 00:07:07.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151974 s, 2.7 MB/s 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:07.187 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.446 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd0", 00:07:07.446 "bdev_name": "Nvme0n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd1", 00:07:07.446 "bdev_name": "Nvme1n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd2", 00:07:07.446 "bdev_name": "Nvme2n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd3", 00:07:07.446 "bdev_name": "Nvme2n2" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd4", 00:07:07.446 "bdev_name": "Nvme2n3" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd5", 00:07:07.446 "bdev_name": "Nvme3n1" 00:07:07.446 } 00:07:07.446 ]' 00:07:07.446 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:07.446 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:07.446 15:11:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd0", 00:07:07.446 "bdev_name": "Nvme0n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd1", 00:07:07.446 "bdev_name": "Nvme1n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd2", 00:07:07.446 "bdev_name": "Nvme2n1" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd3", 00:07:07.446 "bdev_name": "Nvme2n2" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd4", 00:07:07.446 "bdev_name": "Nvme2n3" 00:07:07.446 }, 00:07:07.446 { 00:07:07.446 "nbd_device": "/dev/nbd5", 00:07:07.446 "bdev_name": "Nvme3n1" 00:07:07.446 } 00:07:07.446 ]' 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.446 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.706 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.965 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.240 15:11:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:08.498 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.499 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.757 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.015 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.274 15:11:51 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:09.274 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:09.275 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.275 15:11:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:09.533 /dev/nbd0 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.533 
15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.533 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.533 1+0 records in 00:07:09.533 1+0 records out 00:07:09.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861221 s, 4.8 MB/s 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.534 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:09.792 /dev/nbd1 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.792 1+0 records in 00:07:09.792 1+0 records out 00:07:09.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524858 s, 7.8 MB/s 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:09.792 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:10.051 /dev/nbd10 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.051 1+0 records in 00:07:10.051 1+0 records out 00:07:10.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461981 s, 8.9 MB/s 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.051 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:10.309 /dev/nbd11 00:07:10.309 15:11:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:10.309 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:10.309 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:10.309 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.309 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.310 15:11:53 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.310 1+0 records in 00:07:10.310 1+0 records out 00:07:10.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923965 s, 4.4 MB/s 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.310 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:10.569 /dev/nbd12 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.569 1+0 records in 00:07:10.569 1+0 records out 00:07:10.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734868 s, 5.6 MB/s 00:07:10.569 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:10.827 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:10.827 /dev/nbd13 
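Every nbd_start_disk in this sequence is followed by the same readiness probe, and it runs once more for nbd13 just below: poll /proc/partitions until the kernel registers the name, then issue a single 4 KiB O_DIRECT read and stat the copy to prove the export really serves I/O. A minimal bash sketch of that helper, reconstructed from the trace (the retry ceiling of 20 matches the trace; the sleep interval and the temp-file path are assumptions):

waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest
    # wait for the kernel to register the device node
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # interval assumed; the trace only shows the loop bounds
    done
    # one direct 4 KiB read proves the export answers I/O, not just that it exists
    for (( i = 1; i <= 20; i++ )); do
        dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    local size
    size=$(stat -c %s $tmp)
    rm -f $tmp
    [ "$size" != 0 ]    # a zero-byte copy means the read silently failed
}

The teardown path earlier in this run uses the mirror image, waitfornbd_exit, which runs the same bounded grep loop but breaks once the name has disappeared from /proc/partitions.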
00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.086 1+0 records in 00:07:11.086 1+0 records out 00:07:11.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000919447 s, 4.5 MB/s 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.086 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd0", 00:07:11.346 "bdev_name": "Nvme0n1" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd1", 00:07:11.346 "bdev_name": "Nvme1n1" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd10", 00:07:11.346 "bdev_name": "Nvme2n1" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd11", 00:07:11.346 "bdev_name": "Nvme2n2" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd12", 00:07:11.346 "bdev_name": "Nvme2n3" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd13", 00:07:11.346 "bdev_name": "Nvme3n1" 00:07:11.346 } 00:07:11.346 ]' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd0", 00:07:11.346 "bdev_name": "Nvme0n1" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd1", 00:07:11.346 "bdev_name": "Nvme1n1" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd10", 00:07:11.346 "bdev_name": "Nvme2n1" 
00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd11", 00:07:11.346 "bdev_name": "Nvme2n2" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd12", 00:07:11.346 "bdev_name": "Nvme2n3" 00:07:11.346 }, 00:07:11.346 { 00:07:11.346 "nbd_device": "/dev/nbd13", 00:07:11.346 "bdev_name": "Nvme3n1" 00:07:11.346 } 00:07:11.346 ]' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.346 /dev/nbd1 00:07:11.346 /dev/nbd10 00:07:11.346 /dev/nbd11 00:07:11.346 /dev/nbd12 00:07:11.346 /dev/nbd13' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.346 /dev/nbd1 00:07:11.346 /dev/nbd10 00:07:11.346 /dev/nbd11 00:07:11.346 /dev/nbd12 00:07:11.346 /dev/nbd13' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:11.346 256+0 records in 00:07:11.346 256+0 records out 00:07:11.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123071 s, 85.2 MB/s 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.346 15:11:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.604 256+0 records in 00:07:11.604 256+0 records out 00:07:11.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122518 s, 8.6 MB/s 00:07:11.604 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.604 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.604 256+0 records in 00:07:11.604 256+0 records out 00:07:11.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132952 s, 7.9 MB/s 00:07:11.604 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.604 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:11.863 256+0 records in 00:07:11.863 256+0 records out 00:07:11.863 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129319 s, 8.1 MB/s 00:07:11.863 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.863 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:11.863 256+0 records in 00:07:11.863 256+0 records out 00:07:11.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128711 s, 8.1 MB/s 00:07:11.863 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.863 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:12.122 256+0 records in 00:07:12.122 256+0 records out 00:07:12.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129789 s, 8.1 MB/s 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:12.122 256+0 records in 00:07:12.122 256+0 records out 00:07:12.122 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125562 s, 8.4 MB/s 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.122 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:12.123 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.123 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.382 15:11:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.382 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.642 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.901 15:11:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.901 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.214 15:11:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.473 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.731 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.989 15:11:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.990 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.990 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.990 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:14.249 malloc_lvol_verify 00:07:14.249 15:11:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:14.509 4ca28f07-145a-46e2-b9e9-2efe0aa06eff 00:07:14.509 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:14.768 ef137a7f-39ec-47a1-a9f2-4e10d6d4e3b6 00:07:14.768 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:15.028 /dev/nbd0 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:15.028 mke2fs 1.47.0 (5-Feb-2023) 00:07:15.028 Discarding device blocks: 0/4096 done 00:07:15.028 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:15.028 00:07:15.028 Allocating group tables: 0/1 done 00:07:15.028 Writing inode tables: 0/1 done 00:07:15.028 Creating journal (1024 blocks): done 00:07:15.028 Writing superblocks and filesystem accounting information: 0/1 done 00:07:15.028 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:15.028 15:11:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.028 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61085 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61085 ']' 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61085 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61085 00:07:15.288 killing process with pid 61085 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61085' 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61085 00:07:15.288 15:11:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61085 00:07:16.666 15:11:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:16.666 00:07:16.666 real 0m12.244s 00:07:16.666 user 0m16.074s 00:07:16.666 sys 0m4.988s 00:07:16.666 15:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.666 ************************************ 00:07:16.666 END TEST bdev_nbd 00:07:16.666 ************************************ 00:07:16.666 15:11:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.666 15:11:59 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:16.666 15:11:59 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:16.666 skipping fio tests on NVMe due to multi-ns failures. 00:07:16.666 15:11:59 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
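The data pass in the middle of this run is the substance of the test: the harness counts the exports by piping nbd_get_disks through jq, writes one shared 1 MiB /dev/urandom payload to all six devices with O_DIRECT, byte-compares each device against the payload with cmp, and expects the count to fall back to 0 after the nbd_stop_disk calls. A condensed sketch under the names visible in the trace (the payload file lives under test/bdev/nbdrandtest in the real run; error handling omitted):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

nbd_get_count() {
    # nbd_get_disks prints a JSON array of {nbd_device, bdev_name}; [] when idle
    rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}

nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
tmp=/tmp/nbdrandtest

dd if=/dev/urandom of=$tmp bs=4096 count=256          # one shared 1 MiB payload
for dev in "${nbd_list[@]}"; do
    dd if=$tmp of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M $tmp "$dev"       # -b prints the first differing byte pair
done
rm $tmp

grep -c exits non-zero when nothing matches, which is why the trace shows a bare true after it: a count of 0 is still a valid answer.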
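The closing nbd_with_lvol_verify step repeats the export dance with a logical volume instead of a raw NVMe namespace: a malloc bdev becomes an lvstore, a small lvol is carved out of it, exported as /dev/nbd0, and mkfs.ext4 has to succeed on it before the disk is stopped. A sketch of that flow with the values from the trace (the 4 passed to bdev_lvol_create lines up with the 4096 one-KiB blocks mkfs reports, i.e. a 4 MiB volume):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }

rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing bdev, 512 B blocks
rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
rpc bdev_lvol_create lvol 4 -l lvs                    # volume addressed as lvs/lvol
rpc nbd_start_disk lvs/lvol /dev/nbd0

# hold off on mkfs until the kernel reports a non-zero capacity for the export
[[ -e /sys/block/nbd0/size ]] && (( $(< /sys/block/nbd0/size) > 0 ))

mkfs.ext4 /dev/nbd0
rpc nbd_stop_disk /dev/nbd0

The capacity check mirrors wait_for_nbd_set_capacity above; the 8192 it read is a sector count, and 8192 times 512 bytes is exactly the 4 MiB volume.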
00:07:16.666 15:11:59 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:16.666 15:11:59 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.666 15:11:59 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:16.666 15:11:59 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.666 15:11:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.666 ************************************ 00:07:16.666 START TEST bdev_verify 00:07:16.666 ************************************ 00:07:16.666 15:11:59 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.666 [2024-10-25 15:11:59.375422] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:16.666 [2024-10-25 15:11:59.375565] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61484 ] 00:07:16.925 [2024-10-25 15:11:59.562035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.185 [2024-10-25 15:11:59.685355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.185 [2024-10-25 15:11:59.685397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.751 Running I/O for 5 seconds... 00:07:20.060 20096.00 IOPS, 78.50 MiB/s [2024-10-25T15:12:03.722Z] 20544.00 IOPS, 80.25 MiB/s [2024-10-25T15:12:04.658Z] 21461.33 IOPS, 83.83 MiB/s [2024-10-25T15:12:05.596Z] 21984.00 IOPS, 85.88 MiB/s [2024-10-25T15:12:05.596Z] 21440.00 IOPS, 83.75 MiB/s 00:07:22.868 Latency(us) 00:07:22.868 [2024-10-25T15:12:05.596Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.868 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0xbd0bd 00:07:22.868 Nvme0n1 : 5.07 1742.38 6.81 0.00 0.00 73298.51 14949.58 79169.59 00:07:22.868 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:22.868 Nvme0n1 : 5.06 1794.71 7.01 0.00 0.00 71144.68 14844.30 81275.17 00:07:22.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0xa0000 00:07:22.868 Nvme1n1 : 5.07 1741.24 6.80 0.00 0.00 73210.53 15791.81 72010.64 00:07:22.868 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0xa0000 length 0xa0000 00:07:22.868 Nvme1n1 : 5.07 1794.18 7.01 0.00 0.00 71036.95 17370.99 75800.67 00:07:22.868 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0x80000 00:07:22.868 Nvme2n1 : 5.07 1740.68 6.80 0.00 0.00 73057.25 16212.92 60219.42 00:07:22.868 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x80000 length 0x80000 00:07:22.868 Nvme2n1 : 5.07 1793.64 7.01 0.00 0.00 70826.42 17897.38 61903.88 00:07:22.868 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0x80000 00:07:22.868 Nvme2n2 : 5.08 1740.21 6.80 0.00 0.00 72961.08 15791.81 60640.54 00:07:22.868 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x80000 length 0x80000 00:07:22.868 Nvme2n2 : 5.07 1793.10 7.00 0.00 0.00 70717.11 17792.10 61061.65 00:07:22.868 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0x80000 00:07:22.868 Nvme2n3 : 5.08 1739.72 6.80 0.00 0.00 72849.64 15370.69 58534.97 00:07:22.868 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x80000 length 0x80000 00:07:22.868 Nvme2n3 : 5.07 1792.63 7.00 0.00 0.00 70621.41 18002.66 63167.23 00:07:22.868 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x0 length 0x20000 00:07:22.868 Nvme3n1 : 5.08 1739.15 6.79 0.00 0.00 72749.09 15370.69 61903.88 00:07:22.868 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.868 Verification LBA range: start 0x20000 length 0x20000 00:07:22.868 Nvme3n1 : 5.08 1802.27 7.04 0.00 0.00 70163.81 2671.45 63167.23 00:07:22.868 [2024-10-25T15:12:05.596Z] =================================================================================================================== 00:07:22.868 [2024-10-25T15:12:05.596Z] Total : 21213.91 82.87 0.00 0.00 71869.15 2671.45 81275.17 00:07:24.773 00:07:24.773 real 0m7.741s 00:07:24.773 user 0m14.257s 00:07:24.773 sys 0m0.347s 00:07:24.773 15:12:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.773 15:12:07 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 ************************************ 00:07:24.773 END TEST bdev_verify 00:07:24.773 ************************************ 00:07:24.773 15:12:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:24.773 15:12:07 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:24.773 15:12:07 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.773 15:12:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.773 ************************************ 00:07:24.773 START TEST bdev_verify_big_io 00:07:24.773 ************************************ 00:07:24.773 15:12:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:24.773 [2024-10-25 15:12:07.189300] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:07:24.773 [2024-10-25 15:12:07.189429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:07:24.773 [2024-10-25 15:12:07.372764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:24.773 [2024-10-25 15:12:07.496919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.773 [2024-10-25 15:12:07.496954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.712 Running I/O for 5 seconds... 00:07:29.591 2673.00 IOPS, 167.06 MiB/s [2024-10-25T15:12:14.225Z] 2626.50 IOPS, 164.16 MiB/s [2024-10-25T15:12:14.483Z] 2603.67 IOPS, 162.73 MiB/s [2024-10-25T15:12:14.483Z] 3206.25 IOPS, 200.39 MiB/s 00:07:31.755 Latency(us) 00:07:31.755 [2024-10-25T15:12:14.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.755 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0xbd0b 00:07:31.755 Nvme0n1 : 5.62 136.38 8.52 0.00 0.00 929358.32 9001.33 990462.15 00:07:31.755 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:31.755 Nvme0n1 : 5.32 216.73 13.55 0.00 0.00 566238.41 16212.92 970248.64 00:07:31.755 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0xa000 00:07:31.755 Nvme1n1 : 5.63 132.80 8.30 0.00 0.00 938424.62 8790.77 943297.29 00:07:31.755 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0xa000 length 0xa000 00:07:31.755 Nvme1n1 : 5.64 223.52 13.97 0.00 0.00 524044.69 56850.51 693997.29 00:07:31.755 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0x8000 00:07:31.755 Nvme2n1 : 5.63 132.94 8.31 0.00 0.00 922891.16 9633.00 950035.12 00:07:31.755 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x8000 length 0x8000 00:07:31.755 Nvme2n1 : 5.68 228.93 14.31 0.00 0.00 499244.30 38532.01 481755.40 00:07:31.755 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0x8000 00:07:31.755 Nvme2n2 : 5.63 122.78 7.67 0.00 0.00 982680.09 9527.72 1967448.62 00:07:31.755 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x8000 length 0x8000 00:07:31.755 Nvme2n2 : 5.75 244.93 15.31 0.00 0.00 454517.26 25266.89 491862.16 00:07:31.755 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0x8000 00:07:31.755 Nvme2n3 : 5.63 123.11 7.69 0.00 0.00 964438.79 9633.00 1994399.97 00:07:31.755 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x8000 length 0x8000 00:07:31.755 Nvme2n3 : 5.83 266.78 16.67 0.00 0.00 409232.17 13159.84 498599.99 00:07:31.755 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x0 length 0x2000 00:07:31.755 Nvme3n1 : 5.63 122.56 7.66 0.00 0.00 952750.78 9738.28 2034827.00 00:07:31.755 Job: 
Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:31.755 Verification LBA range: start 0x2000 length 0x2000 00:07:31.755 Nvme3n1 : 5.90 307.43 19.21 0.00 0.00 350530.04 835.65 569347.29 00:07:31.755 [2024-10-25T15:12:14.483Z] =================================================================================================================== 00:07:31.755 [2024-10-25T15:12:14.483Z] Total : 2258.88 141.18 0.00 0.00 622439.51 835.65 2034827.00 00:07:33.660 00:07:33.660 real 0m9.249s 00:07:33.660 user 0m17.118s 00:07:33.660 sys 0m0.432s 00:07:33.660 ************************************ 00:07:33.660 END TEST bdev_verify_big_io 00:07:33.660 ************************************ 00:07:33.660 15:12:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.660 15:12:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:33.918 15:12:16 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.918 15:12:16 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:33.918 15:12:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.918 15:12:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.918 ************************************ 00:07:33.918 START TEST bdev_write_zeroes 00:07:33.918 ************************************ 00:07:33.919 15:12:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.919 [2024-10-25 15:12:16.500202] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:33.919 [2024-10-25 15:12:16.500328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61698 ] 00:07:34.178 [2024-10-25 15:12:16.685340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.178 [2024-10-25 15:12:16.825330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.115 Running I/O for 1 seconds... 
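The three bdevperf passes in this stretch share one invocation shape and differ only in I/O size, workload, and runtime; the flags below are copied from the run_test lines above (paths as in this repo layout):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

$bdevperf --json $conf -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
$bdevperf --json $conf -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
$bdevperf --json $conf -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes

The numbers are self-consistent: 21440 IOPS at 4 KiB is the 83.75 MiB/s the first verify run settles at, and 3206.25 IOPS at 64 KiB is the 200.39 MiB/s peak of the big-I/O run. With -m 0x3 the verify tables report each namespace twice, once per reactor core (Core Mask 0x1 and 0x2).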
00:07:36.104 72512.00 IOPS, 283.25 MiB/s
00:07:36.104 Latency(us)
[2024-10-25T15:12:18.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:36.105 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme0n1 : 1.02 12057.95 47.10 0.00 0.00 10591.10 8738.13 24845.78
00:07:36.105 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme1n1 : 1.02 12045.31 47.05 0.00 0.00 10589.37 8738.13 24740.50
00:07:36.105 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme2n1 : 1.02 12034.12 47.01 0.00 0.00 10558.41 8632.85 23371.87
00:07:36.105 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme2n2 : 1.02 12059.21 47.11 0.00 0.00 10488.03 5685.05 23056.04
00:07:36.105 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme2n3 : 1.02 12019.64 46.95 0.00 0.00 10500.52 5658.73 23898.27
00:07:36.105 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:36.105 Nvme3n1 : 1.02 11946.10 46.66 0.00 0.00 10554.25 7369.51 24319.38
00:07:36.105 [2024-10-25T15:12:18.833Z] ===================================================================================================================
00:07:36.105 [2024-10-25T15:12:18.833Z] Total : 72162.32 281.88 0.00 0.00 10546.89 5658.73 24845.78
00:07:37.484
00:07:37.484 real 0m3.469s
00:07:37.484 user 0m2.992s
00:07:37.484 sys 0m0.358s
00:07:37.484 15:12:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:37.484 ************************************
00:07:37.484 END TEST bdev_write_zeroes
00:07:37.484 ************************************
00:07:37.484 15:12:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:37.484 15:12:19 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:37.484 15:12:19 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:37.484 15:12:19 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:37.484 15:12:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:37.484 ************************************
00:07:37.484 START TEST bdev_json_nonenclosed
00:07:37.484 ************************************
00:07:37.484 15:12:19 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:37.484 [2024-10-25 15:12:20.030677] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization...
00:07:37.484 [2024-10-25 15:12:20.030802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61756 ] 00:07:37.743 [2024-10-25 15:12:20.216571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.743 [2024-10-25 15:12:20.369016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.743 [2024-10-25 15:12:20.369141] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:37.743 [2024-10-25 15:12:20.369188] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:37.743 [2024-10-25 15:12:20.369206] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.002 00:07:38.002 real 0m0.724s 00:07:38.002 user 0m0.459s 00:07:38.002 sys 0m0.159s 00:07:38.002 15:12:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.002 15:12:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:38.002 ************************************ 00:07:38.002 END TEST bdev_json_nonenclosed 00:07:38.002 ************************************ 00:07:38.002 15:12:20 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.002 15:12:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:38.002 15:12:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.002 15:12:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.260 ************************************ 00:07:38.260 START TEST bdev_json_nonarray 00:07:38.260 ************************************ 00:07:38.260 15:12:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.260 [2024-10-25 15:12:20.841076] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:38.260 [2024-10-25 15:12:20.841691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61782 ] 00:07:38.517 [2024-10-25 15:12:21.030116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.517 [2024-10-25 15:12:21.177932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.517 [2024-10-25 15:12:21.178066] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:38.517 [2024-10-25 15:12:21.178093] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.517 [2024-10-25 15:12:21.178107] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.774 00:07:38.774 real 0m0.713s 00:07:38.774 user 0m0.447s 00:07:38.774 sys 0m0.161s 00:07:38.774 15:12:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.774 15:12:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:38.774 ************************************ 00:07:38.774 END TEST bdev_json_nonarray 00:07:38.774 ************************************ 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:39.033 15:12:21 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:39.033 00:07:39.033 real 0m44.340s 00:07:39.033 user 1m5.291s 00:07:39.033 sys 0m8.310s 00:07:39.033 15:12:21 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.033 15:12:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:39.033 ************************************ 00:07:39.033 END TEST blockdev_nvme 00:07:39.033 ************************************ 00:07:39.033 15:12:21 -- spdk/autotest.sh@209 -- # uname -s 00:07:39.033 15:12:21 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:39.033 15:12:21 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:39.033 15:12:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:39.033 15:12:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.033 15:12:21 -- common/autotest_common.sh@10 -- # set +x 00:07:39.033 ************************************ 00:07:39.033 START TEST blockdev_nvme_gpt 00:07:39.033 ************************************ 00:07:39.033 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:39.033 * Looking for test storage... 
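The two quick failures that closed out blockdev_nvme are deliberate: bdev_json_nonenclosed feeds the app a config whose top level is not a JSON object and expects the 'not enclosed in {}' error, and bdev_json_nonarray feeds one whose subsystems key is not an array and expects the matching complaint, with spdk_app_stop'd on non-zero confirming the app bailed out before doing any I/O. Hypothetical inputs of those two shapes (the shipped nonenclosed.json and nonarray.json are not reproduced in this log):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

echo '[ { "subsystems": [] } ]' > nonenclosed.json   # valid JSON, but not an object
echo '{ "subsystems": { } }'    > nonarray.json      # object, but subsystems is not an array

for bad in nonenclosed.json nonarray.json; do
    # each run should log the matching json_config error and stop without running I/O
    $bdevperf --json "$bad" -q 128 -o 4096 -w write_zeroes -t 1 || true
done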
00:07:39.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:39.033 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:39.033 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lcov --version 00:07:39.033 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:39.291 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.291 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.292 15:12:21 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.292 --rc genhtml_branch_coverage=1 00:07:39.292 --rc genhtml_function_coverage=1 00:07:39.292 --rc genhtml_legend=1 00:07:39.292 --rc geninfo_all_blocks=1 00:07:39.292 --rc geninfo_unexecuted_blocks=1 00:07:39.292 00:07:39.292 ' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.292 --rc 
genhtml_branch_coverage=1 00:07:39.292 --rc genhtml_function_coverage=1 00:07:39.292 --rc genhtml_legend=1 00:07:39.292 --rc geninfo_all_blocks=1 00:07:39.292 --rc geninfo_unexecuted_blocks=1 00:07:39.292 00:07:39.292 ' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.292 --rc genhtml_branch_coverage=1 00:07:39.292 --rc genhtml_function_coverage=1 00:07:39.292 --rc genhtml_legend=1 00:07:39.292 --rc geninfo_all_blocks=1 00:07:39.292 --rc geninfo_unexecuted_blocks=1 00:07:39.292 00:07:39.292 ' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:39.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.292 --rc genhtml_branch_coverage=1 00:07:39.292 --rc genhtml_function_coverage=1 00:07:39.292 --rc genhtml_legend=1 00:07:39.292 --rc geninfo_all_blocks=1 00:07:39.292 --rc geninfo_unexecuted_blocks=1 00:07:39.292 00:07:39.292 ' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61866 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61866 
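At this point blockdev.sh has launched spdk_tgt in the background, recorded its pid (61866), armed a cleanup trap, and is about to wait for the target's RPC socket. A minimal sketch of that launch-and-wait pattern, assuming the default /var/tmp/spdk.sock address and the max_retries=100 budget visible in the trace below; the poll interval is an assumption, and the plain kill in the trap stands in for the script's killprocess helper:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # waitforlisten-style poll: succeed once the UNIX domain socket exists,
    # bail out early if the target process dies first
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [[ -S $rpc_addr ]] && break
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
        sleep 0.1
    done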
00:07:39.292 15:12:21 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61866 ']' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.292 15:12:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.292 [2024-10-25 15:12:21.978753] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:39.292 [2024-10-25 15:12:21.978897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61866 ] 00:07:39.549 [2024-10-25 15:12:22.169524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.806 [2024-10-25 15:12:22.313362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.739 15:12:23 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.739 15:12:23 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:40.739 15:12:23 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:40.739 15:12:23 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:40.739 15:12:23 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:41.378 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.638 Waiting for block devices as requested 00:07:41.638 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.638 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.897 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.897 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.171 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:47.171 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1654 -- # local nvme bdf 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.171 15:12:29 
blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme1n1 00:07:47.171 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:47.172 15:12:29 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:47.172 BYT; 00:07:47.172 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:47.172 BYT; 00:07:47.172 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.172 15:12:29 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:47.172 15:12:29 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:48.163 The operation has completed successfully. 00:07:48.163 15:12:30 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:49.102 The operation has completed successfully. 00:07:49.102 15:12:31 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:50.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:50.620 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.620 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.620 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.620 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:50.879 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.879 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.879 [] 00:07:50.879 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:50.879 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:50.879 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.879 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.137 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.137 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:51.137 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.137 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.137 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.137 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:51.137 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:51.137 15:12:33 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.137 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.396 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.396 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.396 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:51.396 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:51.396 15:12:33 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.396 15:12:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.396 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.396 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:51.396 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:51.397 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "55075cdb-87e2-4b80-93b9-6d27d97c5dbb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "55075cdb-87e2-4b80-93b9-6d27d97c5dbb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "ee82f6bd-b792-4c9d-bfb2-1ad6908f6f6f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ee82f6bd-b792-4c9d-bfb2-1ad6908f6f6f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c1601543-7960-447f-80d2-55bec191721f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1601543-7960-447f-80d2-55bec191721f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "87e3b51e-c363-448c-8dcd-cca13f3126bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "87e3b51e-c363-448c-8dcd-cca13f3126bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "596afe43-f11b-49c3-8d01-d46f5efdfeb8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "596afe43-f11b-49c3-8d01-d46f5efdfeb8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:51.656 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:51.656 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:51.656 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:51.656 15:12:34 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61866 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61866 ']' 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61866 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61866 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:51.656 killing process with pid 61866 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61866' 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61866 00:07:51.656 15:12:34 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61866 00:07:54.232 15:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:54.232 15:12:36 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:54.232 15:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:54.232 15:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.232 15:12:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.232 ************************************ 00:07:54.232 START TEST bdev_hello_world 00:07:54.232 ************************************ 00:07:54.232 15:12:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:54.232 
[2024-10-25 15:12:36.751820] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:54.232 [2024-10-25 15:12:36.751967] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62515 ] 00:07:54.232 [2024-10-25 15:12:36.942356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.490 [2024-10-25 15:12:37.061891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.059 [2024-10-25 15:12:37.717212] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:55.059 [2024-10-25 15:12:37.717275] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:55.059 [2024-10-25 15:12:37.717305] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:55.059 [2024-10-25 15:12:37.720336] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:55.059 [2024-10-25 15:12:37.721071] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:55.059 [2024-10-25 15:12:37.721109] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:55.059 [2024-10-25 15:12:37.721310] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:55.059 00:07:55.059 [2024-10-25 15:12:37.721337] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:56.435 00:07:56.435 real 0m2.199s 00:07:56.435 user 0m1.821s 00:07:56.435 sys 0m0.267s 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:56.435 ************************************ 00:07:56.435 END TEST bdev_hello_world 00:07:56.435 ************************************ 00:07:56.435 15:12:38 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:56.435 15:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:56.435 15:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.435 15:12:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.435 ************************************ 00:07:56.435 START TEST bdev_bounds 00:07:56.435 ************************************ 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62563 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62563' 00:07:56.435 Process bdevio pid: 62563 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62563 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62563 ']' 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w 
-s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:56.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.435 15:12:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.435 [2024-10-25 15:12:39.005335] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:07:56.435 [2024-10-25 15:12:39.005495] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62563 ] 00:07:56.722 [2024-10-25 15:12:39.196106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.722 [2024-10-25 15:12:39.321009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.722 [2024-10-25 15:12:39.321276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.722 [2024-10-25 15:12:39.321321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.306 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.306 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:57.306 15:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:57.565 I/O targets: 00:07:57.565 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:57.565 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:57.565 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:57.565 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.565 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.565 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:57.565 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:57.565 00:07:57.565 00:07:57.565 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.565 http://cunit.sourceforge.net/ 00:07:57.565 00:07:57.565 00:07:57.565 Suite: bdevio tests on: Nvme3n1 00:07:57.565 Test: blockdev write read block ...passed 00:07:57.565 Test: blockdev write zeroes read block ...passed 00:07:57.565 Test: blockdev write zeroes read no split ...passed 00:07:57.565 Test: blockdev write zeroes read split ...passed 00:07:57.565 Test: blockdev write zeroes read split partial ...passed 00:07:57.565 Test: blockdev reset ...[2024-10-25 15:12:40.196351] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:57.565 [2024-10-25 15:12:40.200824] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:57.565 passed 00:07:57.565 Test: blockdev write read 8 blocks ...passed 00:07:57.565 Test: blockdev write read size > 128k ...passed 00:07:57.565 Test: blockdev write read invalid size ...passed 00:07:57.565 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.565 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.565 Test: blockdev write read max offset ...passed 00:07:57.565 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.565 Test: blockdev writev readv 8 blocks ...passed 00:07:57.565 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.565 Test: blockdev writev readv block ...passed 00:07:57.565 Test: blockdev writev readv size > 128k ...passed 00:07:57.565 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.565 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.217194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9e04000 len:0x1000 00:07:57.565 [2024-10-25 15:12:40.217276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.565 passed 00:07:57.565 Test: blockdev nvme passthru rw ...passed 00:07:57.565 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:12:40.218415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.565 [2024-10-25 15:12:40.218458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.565 passed 00:07:57.565 Test: blockdev nvme admin passthru ...passed 00:07:57.565 Test: blockdev copy ...passed 00:07:57.565 Suite: bdevio tests on: Nvme2n3 00:07:57.565 Test: blockdev write read block ...passed 00:07:57.565 Test: blockdev write zeroes read block ...passed 00:07:57.565 Test: blockdev write zeroes read no split ...passed 00:07:57.565 Test: blockdev write zeroes read split ...passed 00:07:57.823 Test: blockdev write zeroes read split partial ...passed 00:07:57.823 Test: blockdev reset ...[2024-10-25 15:12:40.315137] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:57.823 [2024-10-25 15:12:40.321859] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:57.823 passed 00:07:57.823 Test: blockdev write read 8 blocks ...passed 00:07:57.823 Test: blockdev write read size > 128k ...passed 00:07:57.823 Test: blockdev write read invalid size ...passed 00:07:57.823 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.823 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.823 Test: blockdev write read max offset ...passed 00:07:57.823 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.823 Test: blockdev writev readv 8 blocks ...passed 00:07:57.823 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.823 Test: blockdev writev readv block ...passed 00:07:57.823 Test: blockdev writev readv size > 128k ...passed 00:07:57.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.824 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.331601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9e02000 len:0x1000 00:07:57.824 [2024-10-25 15:12:40.331698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme passthru rw ...passed 00:07:57.824 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:12:40.332729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.824 [2024-10-25 15:12:40.332769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme admin passthru ...passed 00:07:57.824 Test: blockdev copy ...passed 00:07:57.824 Suite: bdevio tests on: Nvme2n2 00:07:57.824 Test: blockdev write read block ...passed 00:07:57.824 Test: blockdev write zeroes read block ...passed 00:07:57.824 Test: blockdev write zeroes read no split ...passed 00:07:57.824 Test: blockdev write zeroes read split ...passed 00:07:57.824 Test: blockdev write zeroes read split partial ...passed 00:07:57.824 Test: blockdev reset ...[2024-10-25 15:12:40.413414] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:57.824 [2024-10-25 15:12:40.418902] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:57.824 passed 00:07:57.824 Test: blockdev write read 8 blocks ...passed 00:07:57.824 Test: blockdev write read size > 128k ...passed 00:07:57.824 Test: blockdev write read invalid size ...passed 00:07:57.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.824 Test: blockdev write read max offset ...passed 00:07:57.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.824 Test: blockdev writev readv 8 blocks ...passed 00:07:57.824 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.824 Test: blockdev writev readv block ...passed 00:07:57.824 Test: blockdev writev readv size > 128k ...passed 00:07:57.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.824 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.428920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc438000 len:0x1000 00:07:57.824 [2024-10-25 15:12:40.429012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme passthru rw ...passed 00:07:57.824 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:12:40.429848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.824 [2024-10-25 15:12:40.429886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme admin passthru ...passed 00:07:57.824 Test: blockdev copy ...passed 00:07:57.824 Suite: bdevio tests on: Nvme2n1 00:07:57.824 Test: blockdev write read block ...passed 00:07:57.824 Test: blockdev write zeroes read block ...passed 00:07:57.824 Test: blockdev write zeroes read no split ...passed 00:07:57.824 Test: blockdev write zeroes read split ...passed 00:07:57.824 Test: blockdev write zeroes read split partial ...passed 00:07:57.824 Test: blockdev reset ...[2024-10-25 15:12:40.514841] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:57.824 [2024-10-25 15:12:40.520280] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:57.824 passed 00:07:57.824 Test: blockdev write read 8 blocks ...passed 00:07:57.824 Test: blockdev write read size > 128k ...passed 00:07:57.824 Test: blockdev write read invalid size ...passed 00:07:57.824 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:57.824 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:57.824 Test: blockdev write read max offset ...passed 00:07:57.824 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:57.824 Test: blockdev writev readv 8 blocks ...passed 00:07:57.824 Test: blockdev writev readv 30 x 1block ...passed 00:07:57.824 Test: blockdev writev readv block ...passed 00:07:57.824 Test: blockdev writev readv size > 128k ...passed 00:07:57.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:57.824 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.535003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc434000 len:0x1000 00:07:57.824 [2024-10-25 15:12:40.535117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme passthru rw ...passed 00:07:57.824 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:12:40.536345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:57.824 [2024-10-25 15:12:40.536390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:57.824 passed 00:07:57.824 Test: blockdev nvme admin passthru ...passed 00:07:57.824 Test: blockdev copy ...passed 00:07:57.824 Suite: bdevio tests on: Nvme1n1p2 00:07:57.824 Test: blockdev write read block ...passed 00:07:57.824 Test: blockdev write zeroes read block ...passed 00:07:58.083 Test: blockdev write zeroes read no split ...passed 00:07:58.083 Test: blockdev write zeroes read split ...passed 00:07:58.083 Test: blockdev write zeroes read split partial ...passed 00:07:58.083 Test: blockdev reset ...[2024-10-25 15:12:40.639225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:58.083 [2024-10-25 15:12:40.643664] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:58.083 passed 00:07:58.083 Test: blockdev write read 8 blocks ...passed 00:07:58.083 Test: blockdev write read size > 128k ...passed 00:07:58.083 Test: blockdev write read invalid size ...passed 00:07:58.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.083 Test: blockdev write read max offset ...passed 00:07:58.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.083 Test: blockdev writev readv 8 blocks ...passed 00:07:58.083 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.083 Test: blockdev writev readv block ...passed 00:07:58.083 Test: blockdev writev readv size > 128k ...passed 00:07:58.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.083 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.652766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cc430000 len:0x1000 00:07:58.083 [2024-10-25 15:12:40.652843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.083 passed 00:07:58.083 Test: blockdev nvme passthru rw ...passed 00:07:58.083 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.083 Test: blockdev nvme admin passthru ...passed 00:07:58.083 Test: blockdev copy ...passed 00:07:58.083 Suite: bdevio tests on: Nvme1n1p1 00:07:58.083 Test: blockdev write read block ...passed 00:07:58.083 Test: blockdev write zeroes read block ...passed 00:07:58.083 Test: blockdev write zeroes read no split ...passed 00:07:58.083 Test: blockdev write zeroes read split ...passed 00:07:58.083 Test: blockdev write zeroes read split partial ...passed 00:07:58.083 Test: blockdev reset ...[2024-10-25 15:12:40.728806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:58.083 [2024-10-25 15:12:40.733535] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:58.083 passed 00:07:58.083 Test: blockdev write read 8 blocks ...passed 00:07:58.083 Test: blockdev write read size > 128k ...passed 00:07:58.083 Test: blockdev write read invalid size ...passed 00:07:58.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.083 Test: blockdev write read max offset ...passed 00:07:58.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.083 Test: blockdev writev readv 8 blocks ...passed 00:07:58.083 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.083 Test: blockdev writev readv block ...passed 00:07:58.083 Test: blockdev writev readv size > 128k ...passed 00:07:58.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.083 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.743741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ba00e000 len:0x1000 00:07:58.083 [2024-10-25 15:12:40.743816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:58.083 passed 00:07:58.083 Test: blockdev nvme passthru rw ...passed 00:07:58.083 Test: blockdev nvme passthru vendor specific ...passed 00:07:58.083 Test: blockdev nvme admin passthru ...passed 00:07:58.083 Test: blockdev copy ...passed 00:07:58.083 Suite: bdevio tests on: Nvme0n1 00:07:58.083 Test: blockdev write read block ...passed 00:07:58.083 Test: blockdev write zeroes read block ...passed 00:07:58.083 Test: blockdev write zeroes read no split ...passed 00:07:58.083 Test: blockdev write zeroes read split ...passed 00:07:58.342 Test: blockdev write zeroes read split partial ...passed 00:07:58.342 Test: blockdev reset ...[2024-10-25 15:12:40.822691] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:58.342 [2024-10-25 15:12:40.827635] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:58.342 passed 00:07:58.342 Test: blockdev write read 8 blocks ...passed 00:07:58.342 Test: blockdev write read size > 128k ...passed 00:07:58.342 Test: blockdev write read invalid size ...passed 00:07:58.342 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.342 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.342 Test: blockdev write read max offset ...passed 00:07:58.342 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.342 Test: blockdev writev readv 8 blocks ...passed 00:07:58.342 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.342 Test: blockdev writev readv block ...passed 00:07:58.342 Test: blockdev writev readv size > 128k ...passed 00:07:58.342 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.342 Test: blockdev comparev and writev ...[2024-10-25 15:12:40.836591] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:58.342 separate metadata which is not supported yet. 
00:07:58.342 passed
00:07:58.342 Test: blockdev nvme passthru rw ...passed
00:07:58.342 Test: blockdev nvme passthru vendor specific ...[2024-10-25 15:12:40.837342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:58.342 [2024-10-25 15:12:40.837425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:58.342 passed
00:07:58.342 Test: blockdev nvme admin passthru ...passed
00:07:58.342 Test: blockdev copy ...passed
00:07:58.342
00:07:58.342 Run Summary:  Type    Total    Ran  Passed  Failed  Inactive
00:07:58.342               suites      7      7     n/a       0         0
00:07:58.342               tests     161    161     161       0         0
00:07:58.342               asserts  1025   1025    1025       0       n/a
00:07:58.342
00:07:58.342 Elapsed time = 1.969 seconds
00:07:58.342 0
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62563
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62563 ']'
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62563
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62563
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:58.342 killing process with pid 62563
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62563'
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62563
00:07:58.342 15:12:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62563
00:07:59.277 15:12:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:59.277
00:07:59.277 real 0m3.085s
00:07:59.277 user 0m7.841s
00:07:59.277 sys 0m0.486s
00:07:59.277 15:12:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.277 15:12:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:59.277 ************************************
00:07:59.277 END TEST bdev_bounds
00:07:59.277 ************************************
00:07:59.537 15:12:42 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:59.537 15:12:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:59.537 15:12:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.537 15:12:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:59.537 ************************************
00:07:59.537 START TEST bdev_nbd
00:07:59.537 ************************************
00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:59.537 15:12:42
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62628 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62628 /var/tmp/spdk-nbd.sock 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62628 ']' 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:59.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:59.537 15:12:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:59.537 [2024-10-25 15:12:42.167296] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
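The bdev_nbd test drives everything that follows through the bdev_svc helper app launched here: it loads the bdevs from bdev.json and exposes the JSON-RPC socket that the nbd_* calls below talk to. A minimal sketch of the same bring-up, using only the binary, socket, and config visible in this trace; the polling loop is an illustrative stand-in for the waitforlisten helper, and rpc_get_methods is simply assumed here as a cheap RPC to probe readiness:

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-nbd.sock
# Launch the minimal bdev application with the pre-generated bdev config.
"$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 --json "$spdk/test/bdev/bdev.json" &
nbd_pid=$!
# Wait until the app listens on the RPC socket before issuing nbd_* calls.
for ((i = 0; i < 100; i++)); do
    "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null && break
    sleep 0.1
done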
00:07:59.537 [2024-10-25 15:12:42.167430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:59.797 [2024-10-25 15:12:42.352869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:59.797 [2024-10-25 15:12:42.467976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1'
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:00.732 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:00.733 1+0 records in
00:08:00.733 1+0 records out
00:08:00.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00185869 s, 2.2 MB/s
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:00.733 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:00.992 1+0 records in
00:08:00.992 1+0 records out
00:08:00.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646909 s, 6.3 MB/s
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
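Each nbd_start_disk above is followed by the waitfornbd helper: it polls /proc/partitions until the kernel has registered the new device, then proves the data path with a single 4 KiB O_DIRECT read whose size is checked with stat. A condensed sketch of that probe, built only from the commands visible in the trace; the sleep between retries is an assumption, since the xtrace shows only the loop bounds:

# Poll for the device, then read one 4 KiB block through O_DIRECT and
# verify a non-empty result came back from the SPDK bdev behind it.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # assumed back-off; not visible in the xtrace
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
}
waitfornbd nbd0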
00:08:00.992 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:01.250 1+0 records in
00:08:01.250 1+0 records out
00:08:01.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887848 s, 4.6 MB/s
00:08:01.250 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:01.251 15:12:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:01.509 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:08:01.766 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:01.767 1+0 records in
00:08:01.767 1+0 records out
00:08:01.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781907 s, 5.2 MB/s
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:08:01.767 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:02.024 1+0 records in
00:08:02.024 1+0 records out
00:08:02.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715294 s, 5.7 MB/s
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:02.024 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:02.282 1+0 records in
00:08:02.282 1+0 records out
00:08:02.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879669 s, 4.7 MB/s
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:02.282 15:12:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:02.540 1+0 records in
00:08:02.540 1+0 records out
00:08:02.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721751 s, 5.7 MB/s
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:02.540 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 ))
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd0",
00:08:02.798 "bdev_name": "Nvme0n1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd1",
00:08:02.798 "bdev_name": "Nvme1n1p1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd2",
00:08:02.798 "bdev_name": "Nvme1n1p2"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd3",
00:08:02.798 "bdev_name": "Nvme2n1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd4",
00:08:02.798 "bdev_name": "Nvme2n2"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd5",
00:08:02.798 "bdev_name": "Nvme2n3"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd6",
00:08:02.798 "bdev_name": "Nvme3n1"
00:08:02.798 }
00:08:02.798 ]'
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd0",
00:08:02.798 "bdev_name": "Nvme0n1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd1",
00:08:02.798 "bdev_name": "Nvme1n1p1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd2",
00:08:02.798 "bdev_name": "Nvme1n1p2"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd3",
00:08:02.798 "bdev_name": "Nvme2n1"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd4",
00:08:02.798 "bdev_name": "Nvme2n2"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd5",
00:08:02.798 "bdev_name": "Nvme2n3"
00:08:02.798 },
00:08:02.798 {
00:08:02.798 "nbd_device": "/dev/nbd6",
00:08:02.798 "bdev_name": "Nvme3n1"
00:08:02.798 }
00:08:02.798 ]'
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6'
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
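nbd_get_disks returns the export table as a JSON array of {nbd_device, bdev_name} pairs, and the test peels out just the device paths with jq, exactly as traced above. A standalone sketch of that query, mirroring the commands in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Fetch the current exports and reduce them to a list of /dev/nbdX paths.
nbd_disks_json=$($rpc nbd_get_disks)
nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
printf '%s\n' "${nbd_disks_name[@]}"    # /dev/nbd0 through /dev/nbd6 at this point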
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6')
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:02.798 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.056 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.313 15:12:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.570 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:03.828 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:08:04.086 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:04.087 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:04.087 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:04.087 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:04.345 15:12:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:04.603 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
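The start/stop phase ends by detaching every export and confirming the server really dropped them: an empty nbd_get_disks result means grep counts zero /dev/nbd entries, which is exactly what the trace above shows. A sketch of that teardown check; note grep -c exits non-zero when nothing matches, hence the || true, mirroring the true in the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Detach every export started in this phase.
for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6; do
    $rpc nbd_stop_disk "$dev"
done
# An empty JSON array yields count=0, which is what the test asserts.
count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
(( count == 0 )) && echo "all nbd exports removed"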
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:04.604 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:08:04.862 /dev/nbd0
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.121 1+0 records in
00:08:05.121 1+0 records out
00:08:05.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115748 s, 3.5 MB/s
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:05.121 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1
00:08:05.121 /dev/nbd1
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.380 1+0 records in
00:08:05.380 1+0 records out
00:08:05.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643609 s, 6.4 MB/s
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:05.380 15:12:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10
00:08:05.380 /dev/nbd10
00:08:05.380 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:08:05.639 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.640 1+0 records in
00:08:05.640 1+0 records out
00:08:05.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640921 s, 6.4 MB/s
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:05.640 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
00:08:05.897 /dev/nbd11
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:05.897 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:05.897 1+0 records in
00:08:05.897 1+0 records out
00:08:05.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000893693 s, 4.6 MB/s
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:05.898 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12
00:08:06.155 /dev/nbd12
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:06.155 1+0 records in
00:08:06.155 1+0 records out
00:08:06.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651646 s, 6.3 MB/s
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:06.155 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13
00:08:06.413 /dev/nbd13
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:06.413 1+0 records in
00:08:06.413 1+0 records out
00:08:06.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000711831 s, 5.8 MB/s
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
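This second pass repeats the same start/probe unit once per bdev, but now passes an explicit device node to nbd_start_disk so every bdev lands on a known /dev/nbdX (the GPT partitions of Nvme1n1 go to nbd1 and nbd10, and so on); the mapping is completed just below for Nvme3n1 on /dev/nbd14. Consolidated into one loop, the mapping phase amounts to:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
bdevs=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
# Pin each bdev to a fixed nbd node instead of letting the server choose one.
for i in "${!bdevs[@]}"; do
    $rpc nbd_start_disk "${bdevs[$i]}" "${nbds[$i]}"
done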
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:06.413 15:12:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:06.413 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14
00:08:06.671 /dev/nbd14
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:06.671 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:06.671 1+0 records in
00:08:06.671 1+0 records out
00:08:06.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102406 s, 4.0 MB/s
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 ))
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:06.672 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd0",
00:08:06.930 "bdev_name": "Nvme0n1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd1",
00:08:06.930 "bdev_name": "Nvme1n1p1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd10",
00:08:06.930 "bdev_name": "Nvme1n1p2"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd11",
00:08:06.930 "bdev_name": "Nvme2n1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd12",
00:08:06.930 "bdev_name": "Nvme2n2"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd13",
00:08:06.930 "bdev_name": "Nvme2n3"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd14",
00:08:06.930 "bdev_name": "Nvme3n1"
00:08:06.930 }
00:08:06.930 ]'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd0",
00:08:06.930 "bdev_name": "Nvme0n1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd1",
00:08:06.930 "bdev_name": "Nvme1n1p1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd10",
00:08:06.930 "bdev_name": "Nvme1n1p2"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd11",
00:08:06.930 "bdev_name": "Nvme2n1"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd12",
00:08:06.930 "bdev_name": "Nvme2n2"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd13",
00:08:06.930 "bdev_name": "Nvme2n3"
00:08:06.930 },
00:08:06.930 {
00:08:06.930 "nbd_device": "/dev/nbd14",
00:08:06.930 "bdev_name": "Nvme3n1"
00:08:06.930 }
00:08:06.930 ]'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:06.930 /dev/nbd1
00:08:06.930 /dev/nbd10
00:08:06.930 /dev/nbd11
00:08:06.930 /dev/nbd12
00:08:06.930 /dev/nbd13
00:08:06.930 /dev/nbd14'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:06.930 /dev/nbd1
00:08:06.930 /dev/nbd10
00:08:06.930 /dev/nbd11
00:08:06.930 /dev/nbd12
00:08:06.930 /dev/nbd13
00:08:06.930 /dev/nbd14'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:06.930 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:08:06.930 256+0 records in
00:08:06.930 256+0 records out
00:08:06.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011589 s, 90.5 MB/s
00:08:06.931 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:06.931 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:07.188 256+0 records in
00:08:07.188 256+0 records out
00:08:07.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155427 s, 6.7 MB/s
00:08:07.188 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:07.188 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:07.446 256+0 records in
00:08:07.446 256+0 records out
00:08:07.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186923 s, 5.6 MB/s
00:08:07.446 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:07.446 15:12:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:08:07.446 256+0 records in
00:08:07.446 256+0 records out
00:08:07.446 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152876 s, 6.9 MB/s
00:08:07.446 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:07.446 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:08:07.703 256+0 records in
00:08:07.703 256+0 records out
00:08:07.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157188 s, 6.7 MB/s
00:08:07.703 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:07.703 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:08:08.001 256+0 records in
00:08:08.001 256+0 records out
00:08:08.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15549 s, 6.7 MB/s
00:08:08.001 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:08.001 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:08:08.001 256+0 records in
00:08:08.001 256+0 records out
00:08:08.001 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158336 s, 6.6 MB/s
00:08:08.001 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:08.001 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:08:08.262 256+0 records in
00:08:08.262 256+0 records out
00:08:08.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15942 s, 6.6 MB/s
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14'
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14')
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:08.262 15:12:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:08.521 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:08.779 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:09.038 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:09.296 15:12:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename
/dev/nbd12 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.555 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.813 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.073 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:10.332 15:12:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:10.591 malloc_lvol_verify 00:08:10.591 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:10.591 fab32296-e2c8-4990-a707-9ee22b6b5653 00:08:10.850 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:10.850 21e5ba7a-c71b-484a-a7d2-fd0a222089ef 00:08:10.850 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:11.107 /dev/nbd0 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:11.107 mke2fs 1.47.0 (5-Feb-2023) 00:08:11.107 Discarding device blocks: 0/4096 done 00:08:11.107 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:11.107 00:08:11.107 Allocating group tables: 0/1 done 00:08:11.107 Writing inode tables: 0/1 done 00:08:11.107 Creating journal (1024 blocks): done 00:08:11.107 Writing superblocks and filesystem accounting information: 0/1 done 00:08:11.107 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:11.107 15:12:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62628 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62628 ']' 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62628 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62628 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.366 killing process with pid 62628 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62628' 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62628 00:08:11.366 15:12:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62628 00:08:12.749 15:12:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:12.749 00:08:12.749 real 0m13.325s 00:08:12.749 user 0m17.214s 00:08:12.749 sys 0m5.564s 00:08:12.749 15:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.749 15:12:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:12.749 ************************************ 00:08:12.749 END TEST bdev_nbd 00:08:12.749 ************************************ 00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:12.749 skipping fio tests on NVMe due to multi-ns failures. 00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:12.749 15:12:55 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:12.749 15:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:12.749 15:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.749 15:12:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:12.749 ************************************ 00:08:12.749 START TEST bdev_verify 00:08:12.749 ************************************ 00:08:12.749 15:12:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:13.009 [2024-10-25 15:12:55.548877] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:08:13.009 [2024-10-25 15:12:55.549012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63069 ] 00:08:13.267 [2024-10-25 15:12:55.743969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:13.267 [2024-10-25 15:12:55.862607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.267 [2024-10-25 15:12:55.862639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.203 Running I/O for 5 seconds... 
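Each device appears twice in the verify results that follow because of the -C/-m pairing: with core mask 0x3 (two reactors) and -C set, every core submits I/O to every bdev, so each namespace gets a Core Mask 0x1 job and a Core Mask 0x2 job. To rerun the pass by hand, the traced invocation is (paths exactly as used by this run; bdev.json was generated earlier by the harness):

    # 128 outstanding 4 KiB I/Os per job, verify (read-back-and-compare)
    # workload, 5-second run, cores 0-1, every core driving every bdev
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3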
00:08:16.074 23296.00 IOPS, 91.00 MiB/s [2024-10-25T15:12:59.752Z] 22208.00 IOPS, 86.75 MiB/s [2024-10-25T15:13:01.144Z] 21205.33 IOPS, 82.83 MiB/s [2024-10-25T15:13:02.083Z] 21664.00 IOPS, 84.62 MiB/s [2024-10-25T15:13:02.083Z] 21440.00 IOPS, 83.75 MiB/s 00:08:19.355 Latency(us) 00:08:19.355 [2024-10-25T15:13:02.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.355 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0xbd0bd 00:08:19.355 Nvme0n1 : 5.09 1497.70 5.85 0.00 0.00 84979.32 18107.94 99804.22 00:08:19.355 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:19.355 Nvme0n1 : 5.08 1523.14 5.95 0.00 0.00 83545.00 17370.99 98540.88 00:08:19.355 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x4ff80 00:08:19.355 Nvme1n1p1 : 5.09 1497.24 5.85 0.00 0.00 84862.03 14423.18 94329.73 00:08:19.355 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:19.355 Nvme1n1p1 : 5.09 1521.97 5.95 0.00 0.00 83449.01 19687.12 92224.15 00:08:19.355 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x4ff7f 00:08:19.355 Nvme1n1p2 : 5.10 1504.88 5.88 0.00 0.00 84470.44 12370.25 81696.28 00:08:19.355 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:19.355 Nvme1n1p2 : 5.11 1529.07 5.97 0.00 0.00 83139.24 12686.09 80854.05 00:08:19.355 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x80000 00:08:19.355 Nvme2n1 : 5.10 1504.46 5.88 0.00 0.00 84332.90 12686.09 72852.87 00:08:19.355 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x80000 length 0x80000 00:08:19.355 Nvme2n1 : 5.11 1528.67 5.97 0.00 0.00 83003.51 11896.49 71589.53 00:08:19.355 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x80000 00:08:19.355 Nvme2n2 : 5.11 1503.68 5.87 0.00 0.00 84205.48 13896.79 70326.18 00:08:19.355 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x80000 length 0x80000 00:08:19.355 Nvme2n2 : 5.11 1527.63 5.97 0.00 0.00 82885.14 13580.95 64430.57 00:08:19.355 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x80000 00:08:19.355 Nvme2n3 : 5.11 1503.08 5.87 0.00 0.00 84074.76 15054.86 73273.99 00:08:19.355 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x80000 length 0x80000 00:08:19.355 Nvme2n3 : 5.11 1527.27 5.97 0.00 0.00 82746.00 13107.20 64009.46 00:08:19.355 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x0 length 0x20000 00:08:19.355 Nvme3n1 : 5.11 1502.76 5.87 0.00 0.00 83937.33 14107.35 74958.44 00:08:19.355 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:19.355 Verification LBA range: start 0x20000 length 0x20000 00:08:19.355 
Nvme3n1 : 5.11 1526.93 5.96 0.00 0.00 82609.88 12844.00 67378.38 00:08:19.355 [2024-10-25T15:13:02.083Z] =================================================================================================================== 00:08:19.355 [2024-10-25T15:13:02.083Z] Total : 21198.48 82.81 0.00 0.00 83724.69 11896.49 99804.22 00:08:20.734 00:08:20.734 real 0m7.763s 00:08:20.734 user 0m14.333s 00:08:20.734 sys 0m0.318s 00:08:20.734 ************************************ 00:08:20.734 END TEST bdev_verify 00:08:20.734 ************************************ 00:08:20.734 15:13:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.734 15:13:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:20.734 15:13:03 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.734 15:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:20.734 15:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.734 15:13:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.734 ************************************ 00:08:20.734 START TEST bdev_verify_big_io 00:08:20.734 ************************************ 00:08:20.734 15:13:03 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:20.734 [2024-10-25 15:13:03.361169] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:08:20.734 [2024-10-25 15:13:03.361995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63167 ] 00:08:20.993 [2024-10-25 15:13:03.542218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:20.993 [2024-10-25 15:13:03.662447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.993 [2024-10-25 15:13:03.662480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.963 Running I/O for 5 seconds... 
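The START TEST/END TEST banners and the real/user/sys triplets that bracket every test in this log come from autotest's run_test wrapper, which prints the banner, times the command, and propagates its exit code. A condensed sketch of that wrapper (banner width fixed here for brevity; the real function in common/autotest_common.sh also manages the xtrace state that the @1126/@10 trace lines above reflect):

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"         # emits the real/user/sys lines seen in this log
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }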
00:08:25.271 1566.00 IOPS, 97.88 MiB/s [2024-10-25T15:13:09.905Z] 1747.00 IOPS, 109.19 MiB/s [2024-10-25T15:13:10.473Z] 2152.67 IOPS, 134.54 MiB/s [2024-10-25T15:13:10.473Z] 2770.25 IOPS, 173.14 MiB/s 00:08:27.745 Latency(us) 00:08:27.746 [2024-10-25T15:13:10.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.746 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0xbd0b 00:08:27.746 Nvme0n1 : 5.73 134.09 8.38 0.00 0.00 918592.55 12949.28 929821.61 00:08:27.746 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:27.746 Nvme0n1 : 5.73 137.63 8.60 0.00 0.00 891864.10 24951.06 929821.61 00:08:27.746 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x4ff8 00:08:27.746 Nvme1n1p1 : 5.67 132.05 8.25 0.00 0.00 913739.97 77906.25 1347567.55 00:08:27.746 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:27.746 Nvme1n1p1 : 5.77 131.99 8.25 0.00 0.00 919072.68 68641.72 1165645.93 00:08:27.746 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x4ff7 00:08:27.746 Nvme1n1p2 : 5.79 137.05 8.57 0.00 0.00 863670.78 63167.23 1374518.90 00:08:27.746 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:27.746 Nvme1n1p2 : 5.78 139.71 8.73 0.00 0.00 847795.01 66957.26 990462.15 00:08:27.746 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x8000 00:08:27.746 Nvme2n1 : 5.80 136.64 8.54 0.00 0.00 842685.66 63588.34 1387994.58 00:08:27.746 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x8000 length 0x8000 00:08:27.746 Nvme2n1 : 5.80 150.62 9.41 0.00 0.00 775597.69 38321.45 778220.26 00:08:27.746 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x8000 00:08:27.746 Nvme2n2 : 5.81 140.91 8.81 0.00 0.00 802290.14 54323.82 1408208.09 00:08:27.746 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x8000 length 0x8000 00:08:27.746 Nvme2n2 : 5.80 150.24 9.39 0.00 0.00 758178.67 38110.89 795064.85 00:08:27.746 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x8000 00:08:27.746 Nvme2n3 : 5.83 150.76 9.42 0.00 0.00 735069.46 15475.97 1435159.44 00:08:27.746 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x8000 length 0x8000 00:08:27.746 Nvme2n3 : 5.81 154.74 9.67 0.00 0.00 721401.08 24424.66 835491.88 00:08:27.746 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x0 length 0x2000 00:08:27.746 Nvme3n1 : 5.88 171.52 10.72 0.00 0.00 631631.52 1197.55 1455372.95 00:08:27.746 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:27.746 Verification LBA range: start 0x2000 length 0x2000 00:08:27.746 Nvme3n1 : 5.82 164.73 10.30 0.00 0.00 663033.57 
3224.16 848967.56 00:08:27.746 [2024-10-25T15:13:10.474Z] =================================================================================================================== 00:08:27.746 [2024-10-25T15:13:10.474Z] Total : 2032.69 127.04 0.00 0.00 798126.87 1197.55 1455372.95 00:08:30.276 ************************************ 00:08:30.276 END TEST bdev_verify_big_io 00:08:30.276 ************************************ 00:08:30.276 00:08:30.276 real 0m9.153s 00:08:30.276 user 0m17.098s 00:08:30.276 sys 0m0.350s 00:08:30.276 15:13:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.276 15:13:12 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:30.276 15:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.276 15:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:30.276 15:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.276 15:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:30.276 ************************************ 00:08:30.276 START TEST bdev_write_zeroes 00:08:30.276 ************************************ 00:08:30.276 15:13:12 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.276 [2024-10-25 15:13:12.626488] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:08:30.276 [2024-10-25 15:13:12.626642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63283 ] 00:08:30.276 [2024-10-25 15:13:12.813113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.276 [2024-10-25 15:13:12.936523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.212 Running I/O for 1 seconds... 
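The write_zeroes pass launched above changes two things against the earlier verify runs: the workload becomes -w write_zeroes (each bdev is driven with zero-out requests rather than read-back verification) and the core mask is dropped, so the app runs on a single reactor ("Total cores available: 1") and the table that follows shows one job per device. The traced invocation, reproduced from the log:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1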
00:08:32.148 66752.00 IOPS, 260.75 MiB/s 00:08:32.148 Latency(us) 00:08:32.148 [2024-10-25T15:13:14.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.148 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme0n1 : 1.02 9499.78 37.11 0.00 0.00 13434.90 10633.15 47796.54 00:08:32.148 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme1n1p1 : 1.03 9489.17 37.07 0.00 0.00 13430.01 11054.27 48217.65 00:08:32.148 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme1n1p2 : 1.03 9479.01 37.03 0.00 0.00 13397.78 10633.15 48849.32 00:08:32.148 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme2n1 : 1.03 9469.40 36.99 0.00 0.00 13336.79 10633.15 48849.32 00:08:32.148 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme2n2 : 1.03 9513.39 37.16 0.00 0.00 13268.83 7369.51 48849.32 00:08:32.148 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme2n3 : 1.03 9504.72 37.13 0.00 0.00 13236.94 7580.07 49691.55 00:08:32.148 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:32.148 Nvme3n1 : 1.03 9495.63 37.09 0.00 0.00 13215.58 7737.99 47375.42 00:08:32.148 [2024-10-25T15:13:14.876Z] =================================================================================================================== 00:08:32.148 [2024-10-25T15:13:14.876Z] Total : 66451.10 259.57 0.00 0.00 13331.29 7369.51 49691.55 00:08:33.528 00:08:33.528 real 0m3.424s 00:08:33.528 user 0m3.021s 00:08:33.528 sys 0m0.283s 00:08:33.528 15:13:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.528 15:13:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:33.528 ************************************ 00:08:33.528 END TEST bdev_write_zeroes 00:08:33.528 ************************************ 00:08:33.528 15:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.528 15:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:33.528 15:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.528 15:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.528 ************************************ 00:08:33.528 START TEST bdev_json_nonenclosed 00:08:33.528 ************************************ 00:08:33.528 15:13:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.528 [2024-10-25 15:13:16.121931] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:08:33.528 [2024-10-25 15:13:16.122100] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:08:33.787 [2024-10-25 15:13:16.320218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.787 [2024-10-25 15:13:16.446736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.787 [2024-10-25 15:13:16.446832] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:33.787 [2024-10-25 15:13:16.446855] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:33.787 [2024-10-25 15:13:16.446868] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.045 00:08:34.045 real 0m0.718s 00:08:34.045 user 0m0.447s 00:08:34.045 sys 0m0.166s 00:08:34.045 15:13:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.045 ************************************ 00:08:34.045 END TEST bdev_json_nonenclosed 00:08:34.045 ************************************ 00:08:34.045 15:13:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:34.305 15:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.305 15:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:34.305 15:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.305 15:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:34.305 ************************************ 00:08:34.305 START TEST bdev_json_nonarray 00:08:34.305 ************************************ 00:08:34.305 15:13:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:34.305 [2024-10-25 15:13:16.895262] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:08:34.305 [2024-10-25 15:13:16.895397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63366 ] 00:08:34.563 [2024-10-25 15:13:17.078503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.563 [2024-10-25 15:13:17.206393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.563 [2024-10-25 15:13:17.206500] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
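Both JSON tests in this stretch are negative tests: bdev_json_nonenclosed and bdev_json_nonarray hand bdevperf a deliberately malformed --json config and pass only when json_config rejects it and the app exits non-zero, which is the *ERROR*/spdk_app_stop'd sequence in the log. The fixture files themselves are never echoed here; purely as an illustration of the two failure modes json_config_prepare_ctx reports, hypothetical file bodies that would trip them look like this (not the repo's actual fixtures):

    # "Invalid JSON configuration: not enclosed in {}."
    printf '%s\n' '"subsystems": []' > /tmp/nonenclosed.json

    # "Invalid JSON configuration: 'subsystems' should be an array."
    printf '%s\n' '{ "subsystems": { "bdev": {} } }' > /tmp/nonarray.json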
00:08:34.563 [2024-10-25 15:13:17.206525] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:34.563 [2024-10-25 15:13:17.206538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.822 ************************************ 00:08:34.822 END TEST bdev_json_nonarray 00:08:34.822 ************************************ 00:08:34.822 00:08:34.822 real 0m0.690s 00:08:34.822 user 0m0.437s 00:08:34.822 sys 0m0.148s 00:08:34.822 15:13:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.822 15:13:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:34.822 15:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:34.822 15:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:34.822 15:13:17 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:34.822 15:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.822 15:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.822 15:13:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 ************************************ 00:08:35.080 START TEST bdev_gpt_uuid 00:08:35.080 ************************************ 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63397 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63397 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 63397 ']' 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.080 15:13:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:35.080 [2024-10-25 15:13:17.669721] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
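bdev_gpt_uuid is the first test in this block to stand up a long-lived spdk_tgt rather than a short bdevperf run, so it parks in waitforlisten until the target's RPC socket answers before issuing any bdev_get_bdevs queries. A condensed sketch of that wait loop, with details not visible in the trace treated as assumptions (the retry budget, and rpc_get_methods as the liveness probe; the real helper also accepts TCP addresses):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }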
00:08:35.080 [2024-10-25 15:13:17.670028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63397 ] 00:08:35.338 [2024-10-25 15:13:17.857168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.338 [2024-10-25 15:13:17.984461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.274 15:13:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.274 15:13:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:36.274 15:13:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:36.274 15:13:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.274 15:13:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.532 Some configs were skipped because the RPC state that can call them passed over. 00:08:36.532 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.532 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:36.532 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.532 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.791 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:36.791 { 00:08:36.791 "name": "Nvme1n1p1", 00:08:36.791 "aliases": [ 00:08:36.791 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:36.791 ], 00:08:36.791 "product_name": "GPT Disk", 00:08:36.791 "block_size": 4096, 00:08:36.791 "num_blocks": 655104, 00:08:36.791 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:36.791 "assigned_rate_limits": { 00:08:36.791 "rw_ios_per_sec": 0, 00:08:36.791 "rw_mbytes_per_sec": 0, 00:08:36.791 "r_mbytes_per_sec": 0, 00:08:36.791 "w_mbytes_per_sec": 0 00:08:36.791 }, 00:08:36.791 "claimed": false, 00:08:36.791 "zoned": false, 00:08:36.791 "supported_io_types": { 00:08:36.791 "read": true, 00:08:36.791 "write": true, 00:08:36.791 "unmap": true, 00:08:36.791 "flush": true, 00:08:36.791 "reset": true, 00:08:36.791 "nvme_admin": false, 00:08:36.791 "nvme_io": false, 00:08:36.791 "nvme_io_md": false, 00:08:36.791 "write_zeroes": true, 00:08:36.791 "zcopy": false, 00:08:36.791 "get_zone_info": false, 00:08:36.791 "zone_management": false, 00:08:36.791 "zone_append": false, 00:08:36.791 "compare": true, 00:08:36.791 "compare_and_write": false, 00:08:36.791 "abort": true, 00:08:36.791 "seek_hole": false, 00:08:36.791 "seek_data": false, 00:08:36.791 "copy": true, 00:08:36.792 "nvme_iov_md": false 00:08:36.792 }, 00:08:36.792 "driver_specific": { 
00:08:36.792 "gpt": { 00:08:36.792 "base_bdev": "Nvme1n1", 00:08:36.792 "offset_blocks": 256, 00:08:36.792 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:36.792 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:36.792 "partition_name": "SPDK_TEST_first" 00:08:36.792 } 00:08:36.792 } 00:08:36.792 } 00:08:36.792 ]' 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:36.792 { 00:08:36.792 "name": "Nvme1n1p2", 00:08:36.792 "aliases": [ 00:08:36.792 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:36.792 ], 00:08:36.792 "product_name": "GPT Disk", 00:08:36.792 "block_size": 4096, 00:08:36.792 "num_blocks": 655103, 00:08:36.792 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:36.792 "assigned_rate_limits": { 00:08:36.792 "rw_ios_per_sec": 0, 00:08:36.792 "rw_mbytes_per_sec": 0, 00:08:36.792 "r_mbytes_per_sec": 0, 00:08:36.792 "w_mbytes_per_sec": 0 00:08:36.792 }, 00:08:36.792 "claimed": false, 00:08:36.792 "zoned": false, 00:08:36.792 "supported_io_types": { 00:08:36.792 "read": true, 00:08:36.792 "write": true, 00:08:36.792 "unmap": true, 00:08:36.792 "flush": true, 00:08:36.792 "reset": true, 00:08:36.792 "nvme_admin": false, 00:08:36.792 "nvme_io": false, 00:08:36.792 "nvme_io_md": false, 00:08:36.792 "write_zeroes": true, 00:08:36.792 "zcopy": false, 00:08:36.792 "get_zone_info": false, 00:08:36.792 "zone_management": false, 00:08:36.792 "zone_append": false, 00:08:36.792 "compare": true, 00:08:36.792 "compare_and_write": false, 00:08:36.792 "abort": true, 00:08:36.792 "seek_hole": false, 00:08:36.792 "seek_data": false, 00:08:36.792 "copy": true, 00:08:36.792 "nvme_iov_md": false 00:08:36.792 }, 00:08:36.792 "driver_specific": { 00:08:36.792 "gpt": { 00:08:36.792 "base_bdev": "Nvme1n1", 00:08:36.792 "offset_blocks": 655360, 00:08:36.792 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:36.792 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:36.792 "partition_name": "SPDK_TEST_second" 00:08:36.792 } 00:08:36.792 } 00:08:36.792 } 00:08:36.792 ]' 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:36.792 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63397 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 63397 ']' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 63397 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63397 00:08:37.051 killing process with pid 63397 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63397' 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 63397 00:08:37.051 15:13:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 63397 00:08:39.584 00:08:39.584 real 0m4.660s 00:08:39.584 user 0m4.847s 00:08:39.584 sys 0m0.565s 00:08:39.584 ************************************ 00:08:39.584 END TEST bdev_gpt_uuid 00:08:39.584 15:13:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.584 15:13:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.584 ************************************ 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:39.584 15:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:40.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.411 Waiting for block devices as requested 00:08:40.411 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.684 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:40.684 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.684 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:45.957 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:45.957 15:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:45.957 15:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:46.216 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.216 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:46.216 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:46.216 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:46.216 15:13:28 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:46.216 ************************************ 00:08:46.216 END TEST blockdev_nvme_gpt 00:08:46.216 ************************************ 00:08:46.216 00:08:46.216 real 1m7.173s 00:08:46.216 user 1m23.491s 00:08:46.216 sys 0m12.698s 00:08:46.216 15:13:28 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.216 15:13:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:46.216 15:13:28 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:46.216 15:13:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.216 15:13:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.216 15:13:28 -- common/autotest_common.sh@10 -- # set +x 00:08:46.216 ************************************ 00:08:46.216 START TEST nvme 00:08:46.216 ************************************ 00:08:46.216 15:13:28 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:46.475 * Looking for test storage... 00:08:46.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:46.475 15:13:28 nvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:46.475 15:13:28 nvme -- common/autotest_common.sh@1689 -- # lcov --version 00:08:46.475 15:13:28 nvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.475 15:13:29 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.475 15:13:29 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.475 15:13:29 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.475 15:13:29 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.475 15:13:29 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.475 15:13:29 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:46.475 15:13:29 nvme -- scripts/common.sh@345 -- # : 1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.475 15:13:29 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:46.475 15:13:29 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@353 -- # local d=1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.475 15:13:29 nvme -- scripts/common.sh@355 -- # echo 1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.475 15:13:29 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@353 -- # local d=2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.475 15:13:29 nvme -- scripts/common.sh@355 -- # echo 2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.475 15:13:29 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.475 15:13:29 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.475 15:13:29 nvme -- scripts/common.sh@368 -- # return 0 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.475 --rc genhtml_branch_coverage=1 00:08:46.475 --rc genhtml_function_coverage=1 00:08:46.475 --rc genhtml_legend=1 00:08:46.475 --rc geninfo_all_blocks=1 00:08:46.475 --rc geninfo_unexecuted_blocks=1 00:08:46.475 00:08:46.475 ' 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.475 --rc genhtml_branch_coverage=1 00:08:46.475 --rc genhtml_function_coverage=1 00:08:46.475 --rc genhtml_legend=1 00:08:46.475 --rc geninfo_all_blocks=1 00:08:46.475 --rc geninfo_unexecuted_blocks=1 00:08:46.475 00:08:46.475 ' 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.475 --rc genhtml_branch_coverage=1 00:08:46.475 --rc genhtml_function_coverage=1 00:08:46.475 --rc genhtml_legend=1 00:08:46.475 --rc geninfo_all_blocks=1 00:08:46.475 --rc geninfo_unexecuted_blocks=1 00:08:46.475 00:08:46.475 ' 00:08:46.475 15:13:29 nvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:46.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.475 --rc genhtml_branch_coverage=1 00:08:46.475 --rc genhtml_function_coverage=1 00:08:46.475 --rc genhtml_legend=1 00:08:46.475 --rc geninfo_all_blocks=1 00:08:46.475 --rc geninfo_unexecuted_blocks=1 00:08:46.475 00:08:46.475 ' 00:08:46.475 15:13:29 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:47.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:48.001 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.001 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.001 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.001 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:48.001 15:13:30 nvme -- nvme/nvme.sh@79 -- # uname 00:08:48.001 15:13:30 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:48.001 15:13:30 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:48.001 15:13:30 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:48.001 15:13:30 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1071 -- # stubpid=64058 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:48.001 Waiting for stub to ready for secondary processes... 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64058 ]] 00:08:48.001 15:13:30 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:48.300 [2024-10-25 15:13:30.743443] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:08:48.300 [2024-10-25 15:13:30.743577] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:49.240 15:13:31 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:49.240 15:13:31 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64058 ]] 00:08:49.240 15:13:31 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:49.240 [2024-10-25 15:13:31.814837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.500 [2024-10-25 15:13:31.983964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.500 [2024-10-25 15:13:31.984087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.500 [2024-10-25 15:13:31.984056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.500 [2024-10-25 15:13:32.005925] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:49.500 [2024-10-25 15:13:32.006150] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.500 [2024-10-25 15:13:32.023062] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:49.500 [2024-10-25 15:13:32.023394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:49.500 [2024-10-25 15:13:32.026632] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.500 [2024-10-25 15:13:32.027222] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:49.500 [2024-10-25 15:13:32.027394] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:49.500 [2024-10-25 15:13:32.031216] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.500 [2024-10-25 15:13:32.031526] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:49.500 [2024-10-25 15:13:32.031631] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:49.500 [2024-10-25 15:13:32.035166] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:49.500 [2024-10-25 15:13:32.035411] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:49.500 [2024-10-25 15:13:32.035491] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:49.500 [2024-10-25 15:13:32.035554] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:49.500 [2024-10-25 15:13:32.035611] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:50.070 done. 00:08:50.070 15:13:32 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:50.070 15:13:32 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:50.070 15:13:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:50.070 15:13:32 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:50.070 15:13:32 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.070 15:13:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.070 ************************************ 00:08:50.070 START TEST nvme_reset 00:08:50.070 ************************************ 00:08:50.070 15:13:32 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:50.330 Initializing NVMe Controllers 00:08:50.330 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:50.330 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:50.330 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:50.330 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:50.330 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:50.330 00:08:50.330 real 0m0.323s 00:08:50.330 user 0m0.122s 00:08:50.330 sys 0m0.148s 00:08:50.330 15:13:33 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.330 ************************************ 00:08:50.330 END TEST nvme_reset 00:08:50.330 ************************************ 00:08:50.330 15:13:33 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:50.589 15:13:33 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:50.589 15:13:33 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.589 15:13:33 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.589 15:13:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:50.589 ************************************ 00:08:50.589 START TEST nvme_identify 00:08:50.589 ************************************ 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:50.589 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:50.589 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:50.589 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:50.589 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # bdfs=() 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1494 -- # local bdfs 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:08:50.589 15:13:33 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:08:50.589 15:13:33 nvme.nvme_identify -- 
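In the stretch above the harness has started test/app/stub/stub as the long-lived DPDK primary process (4 GiB of hugepage memory via -s 4096, cores 1-3 per the 0xE mask) and slept in one-second steps until the stub created /var/run/spdk_stub0, checking /proc/64058 on each pass so a dead stub aborts the wait; once the CUSE sessions for spdk/nvme0 through spdk/nvme3n3 were up it printed "done." and the individual tests could attach as secondary processes. The gist as a sketch (rootdir and the retry cap are illustrative, not the exact _start_stub helper):

  # Launch the stub as the DPDK primary process, then poll until its ready
  # marker appears, bailing out if the process dies first.
  rootdir=/home/vagrant/spdk_repo/spdk
  "$rootdir/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!
  for ((i = 0; i < 60; i++)); do
      [[ -e /var/run/spdk_stub0 ]] && break                    # stub is ready
      [[ -e /proc/$stubpid ]] || { echo "stub died" >&2; exit 1; }
      sleep 1s
  done
  echo done.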
common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.589 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:50.852 [2024-10-25 15:13:33.491007] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64091 terminated unexpected 00:08:50.852 ===================================================== 00:08:50.852 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.852 ===================================================== 00:08:50.852 Controller Capabilities/Features 00:08:50.852 ================================ 00:08:50.852 Vendor ID: 1b36 00:08:50.852 Subsystem Vendor ID: 1af4 00:08:50.852 Serial Number: 12340 00:08:50.852 Model Number: QEMU NVMe Ctrl 00:08:50.852 Firmware Version: 8.0.0 00:08:50.852 Recommended Arb Burst: 6 00:08:50.852 IEEE OUI Identifier: 00 54 52 00:08:50.852 Multi-path I/O 00:08:50.852 May have multiple subsystem ports: No 00:08:50.852 May have multiple controllers: No 00:08:50.852 Associated with SR-IOV VF: No 00:08:50.852 Max Data Transfer Size: 524288 00:08:50.852 Max Number of Namespaces: 256 00:08:50.852 Max Number of I/O Queues: 64 00:08:50.852 NVMe Specification Version (VS): 1.4 00:08:50.852 NVMe Specification Version (Identify): 1.4 00:08:50.852 Maximum Queue Entries: 2048 00:08:50.852 Contiguous Queues Required: Yes 00:08:50.852 Arbitration Mechanisms Supported 00:08:50.852 Weighted Round Robin: Not Supported 00:08:50.852 Vendor Specific: Not Supported 00:08:50.852 Reset Timeout: 7500 ms 00:08:50.852 Doorbell Stride: 4 bytes 00:08:50.852 NVM Subsystem Reset: Not Supported 00:08:50.852 Command Sets Supported 00:08:50.852 NVM Command Set: Supported 00:08:50.852 Boot Partition: Not Supported 00:08:50.852 Memory Page Size Minimum: 4096 bytes 00:08:50.852 Memory Page Size Maximum: 65536 bytes 00:08:50.852 Persistent Memory Region: Not Supported 00:08:50.852 Optional Asynchronous Events Supported 00:08:50.852 Namespace Attribute Notices: Supported 00:08:50.852 Firmware Activation Notices: Not Supported 00:08:50.852 ANA Change Notices: Not Supported 00:08:50.852 PLE Aggregate Log Change Notices: Not Supported 00:08:50.852 LBA Status Info Alert Notices: Not Supported 00:08:50.852 EGE Aggregate Log Change Notices: Not Supported 00:08:50.852 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.852 Zone Descriptor Change Notices: Not Supported 00:08:50.852 Discovery Log Change Notices: Not Supported 00:08:50.852 Controller Attributes 00:08:50.852 128-bit Host Identifier: Not Supported 00:08:50.852 Non-Operational Permissive Mode: Not Supported 00:08:50.852 NVM Sets: Not Supported 00:08:50.852 Read Recovery Levels: Not Supported 00:08:50.852 Endurance Groups: Not Supported 00:08:50.852 Predictable Latency Mode: Not Supported 00:08:50.852 Traffic Based Keep Alive: Not Supported 00:08:50.852 Namespace Granularity: Not Supported 00:08:50.852 SQ Associations: Not Supported 00:08:50.852 UUID List: Not Supported 00:08:50.852 Multi-Domain Subsystem: Not Supported 00:08:50.852 Fixed Capacity Management: Not Supported 00:08:50.852 Variable Capacity Management: Not Supported 00:08:50.852 Delete Endurance Group: Not Supported 00:08:50.852 Delete NVM Set: Not Supported 00:08:50.852 Extended LBA Formats Supported: Supported 00:08:50.852 Flexible Data Placement Supported: Not Supported 00:08:50.852 00:08:50.852 Controller Memory Buffer Support 00:08:50.852 ================================ 00:08:50.852 Supported: No 
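For reference, the BDF list printed at the top of this identify run was assembled by get_nvme_bdfs: scripts/gen_nvme.sh renders a JSON bdev config with one attach entry per controller, and jq pulls out each entry's PCI address. As a standalone snippet using the same paths as the trace:

  # How the four BDFs above were collected: gen_nvme.sh emits a JSON bdev
  # config and jq extracts each controller's PCI address (params.traddr).
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"      # 0000:00:10.0 ... 0000:00:13.0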
00:08:50.852 00:08:50.852 Persistent Memory Region Support 00:08:50.852 ================================ 00:08:50.852 Supported: No 00:08:50.852 00:08:50.852 Admin Command Set Attributes 00:08:50.852 ============================ 00:08:50.852 Security Send/Receive: Not Supported 00:08:50.852 Format NVM: Supported 00:08:50.852 Firmware Activate/Download: Not Supported 00:08:50.852 Namespace Management: Supported 00:08:50.852 Device Self-Test: Not Supported 00:08:50.852 Directives: Supported 00:08:50.852 NVMe-MI: Not Supported 00:08:50.852 Virtualization Management: Not Supported 00:08:50.852 Doorbell Buffer Config: Supported 00:08:50.852 Get LBA Status Capability: Not Supported 00:08:50.852 Command & Feature Lockdown Capability: Not Supported 00:08:50.852 Abort Command Limit: 4 00:08:50.852 Async Event Request Limit: 4 00:08:50.852 Number of Firmware Slots: N/A 00:08:50.852 Firmware Slot 1 Read-Only: N/A 00:08:50.852 Firmware Activation Without Reset: N/A 00:08:50.852 Multiple Update Detection Support: N/A 00:08:50.852 Firmware Update Granularity: No Information Provided 00:08:50.852 Per-Namespace SMART Log: Yes 00:08:50.852 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.852 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:50.852 Command Effects Log Page: Supported 00:08:50.852 Get Log Page Extended Data: Supported 00:08:50.852 Telemetry Log Pages: Not Supported 00:08:50.852 Persistent Event Log Pages: Not Supported 00:08:50.852 Supported Log Pages Log Page: May Support 00:08:50.852 Commands Supported & Effects Log Page: Not Supported 00:08:50.852 Feature Identifiers & Effects Log Page: May Support 00:08:50.852 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.852 Data Area 4 for Telemetry Log: Not Supported 00:08:50.852 Error Log Page Entries Supported: 1 00:08:50.852 Keep Alive: Not Supported 00:08:50.852 00:08:50.852 NVM Command Set Attributes 00:08:50.852 ========================== 00:08:50.852 Submission Queue Entry Size 00:08:50.852 Max: 64 00:08:50.852 Min: 64 00:08:50.852 Completion Queue Entry Size 00:08:50.852 Max: 16 00:08:50.852 Min: 16 00:08:50.852 Number of Namespaces: 256 00:08:50.852 Compare Command: Supported 00:08:50.852 Write Uncorrectable Command: Not Supported 00:08:50.852 Dataset Management Command: Supported 00:08:50.852 Write Zeroes Command: Supported 00:08:50.852 Set Features Save Field: Supported 00:08:50.852 Reservations: Not Supported 00:08:50.852 Timestamp: Supported 00:08:50.852 Copy: Supported 00:08:50.852 Volatile Write Cache: Present 00:08:50.852 Atomic Write Unit (Normal): 1 00:08:50.852 Atomic Write Unit (PFail): 1 00:08:50.852 Atomic Compare & Write Unit: 1 00:08:50.852 Fused Compare & Write: Not Supported 00:08:50.852 Scatter-Gather List 00:08:50.852 SGL Command Set: Supported 00:08:50.852 SGL Keyed: Not Supported 00:08:50.852 SGL Bit Bucket Descriptor: Not Supported 00:08:50.852 SGL Metadata Pointer: Not Supported 00:08:50.852 Oversized SGL: Not Supported 00:08:50.852 SGL Metadata Address: Not Supported 00:08:50.852 SGL Offset: Not Supported 00:08:50.852 Transport SGL Data Block: Not Supported 00:08:50.852 Replay Protected Memory Block: Not Supported 00:08:50.852 00:08:50.852 Firmware Slot Information 00:08:50.852 ========================= 00:08:50.852 Active slot: 1 00:08:50.852 Slot 1 Firmware Revision: 1.0 00:08:50.852 00:08:50.852 00:08:50.852 Commands Supported and Effects 00:08:50.852 ============================== 00:08:50.852 Admin Commands 00:08:50.852 -------------- 00:08:50.852 Delete I/O Submission Queue (00h): Supported 
00:08:50.852 Create I/O Submission Queue (01h): Supported 00:08:50.852 Get Log Page (02h): Supported 00:08:50.852 Delete I/O Completion Queue (04h): Supported 00:08:50.852 Create I/O Completion Queue (05h): Supported 00:08:50.852 Identify (06h): Supported 00:08:50.852 Abort (08h): Supported 00:08:50.852 Set Features (09h): Supported 00:08:50.852 Get Features (0Ah): Supported 00:08:50.852 Asynchronous Event Request (0Ch): Supported 00:08:50.852 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.852 Directive Send (19h): Supported 00:08:50.852 Directive Receive (1Ah): Supported 00:08:50.852 Virtualization Management (1Ch): Supported 00:08:50.852 Doorbell Buffer Config (7Ch): Supported 00:08:50.852 Format NVM (80h): Supported LBA-Change 00:08:50.852 I/O Commands 00:08:50.852 ------------ 00:08:50.852 Flush (00h): Supported LBA-Change 00:08:50.852 Write (01h): Supported LBA-Change 00:08:50.852 Read (02h): Supported 00:08:50.852 Compare (05h): Supported 00:08:50.852 Write Zeroes (08h): Supported LBA-Change 00:08:50.852 Dataset Management (09h): Supported LBA-Change 00:08:50.852 Unknown (0Ch): Supported 00:08:50.852 Unknown (12h): Supported 00:08:50.852 Copy (19h): Supported LBA-Change 00:08:50.852 Unknown (1Dh): Supported LBA-Change 00:08:50.852 00:08:50.852 Error Log 00:08:50.852 ========= 00:08:50.852 00:08:50.852 Arbitration 00:08:50.852 =========== 00:08:50.852 Arbitration Burst: no limit 00:08:50.852 00:08:50.852 Power Management 00:08:50.852 ================ 00:08:50.852 Number of Power States: 1 00:08:50.852 Current Power State: Power State #0 00:08:50.852 Power State #0: 00:08:50.852 Max Power: 25.00 W 00:08:50.852 Non-Operational State: Operational 00:08:50.852 Entry Latency: 16 microseconds 00:08:50.852 Exit Latency: 4 microseconds 00:08:50.852 Relative Read Throughput: 0 00:08:50.852 Relative Read Latency: 0 00:08:50.852 Relative Write Throughput: 0 00:08:50.852 Relative Write Latency: 0 00:08:50.852 Idle Power: Not Reported 00:08:50.852 Active Power: Not Reported 00:08:50.852 Non-Operational Permissive Mode: Not Supported 00:08:50.852 00:08:50.852 Health Information 00:08:50.852 ================== 00:08:50.853 Critical Warnings: 00:08:50.853 Available Spare Space: OK 00:08:50.853 Temperature: OK 00:08:50.853 Device Reliability: OK 00:08:50.853 Read Only: No 00:08:50.853 Volatile Memory Backup: OK 00:08:50.853 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.853 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.853 Available Spare: 0% 00:08:50.853 Available Spare Threshold: 0% 00:08:50.853 Life Percentage Used: 0% 00:08:50.853 Data Units Read: 792 00:08:50.853 Data Units Written: 720 00:08:50.853 Host Read Commands: 37769 00:08:50.853 Host Write Commands: 37555 00:08:50.853 Controller Busy Time: 0 minutes 00:08:50.853 Power Cycles: 0 00:08:50.853 Power On Hours: 0 hours 00:08:50.853 Unsafe Shutdowns: 0 00:08:50.853 Unrecoverable Media Errors: 0 00:08:50.853 Lifetime Error Log Entries: 0 00:08:50.853 Warning Temperature Time: 0 minutes 00:08:50.853 Critical Temperature Time: 0 minutes 00:08:50.853 00:08:50.853 Number of Queues 00:08:50.853 ================ 00:08:50.853 Number of I/O Submission Queues: 64 00:08:50.853 Number of I/O Completion Queues: 64 00:08:50.853 00:08:50.853 ZNS Specific Controller Data 00:08:50.853 ============================ 00:08:50.853 Zone Append Size Limit: 0 00:08:50.853 00:08:50.853 00:08:50.853 Active Namespaces 00:08:50.853 ================= 00:08:50.853 Namespace ID:1 00:08:50.853 Error Recovery Timeout: Unlimited 00:08:50.853 
Command Set Identifier: NVM (00h) 00:08:50.853 Deallocate: Supported 00:08:50.853 Deallocated/Unwritten Error: Supported 00:08:50.853 Deallocated Read Value: All 0x00 00:08:50.853 Deallocate in Write Zeroes: Not Supported 00:08:50.853 Deallocated Guard Field: 0xFFFF 00:08:50.853 Flush: Supported 00:08:50.853 Reservation: Not Supported 00:08:50.853 Metadata Transferred as: Separate Metadata Buffer 00:08:50.853 Namespace Sharing Capabilities: Private 00:08:50.853 Size (in LBAs): 1548666 (5GiB) 00:08:50.853 Capacity (in LBAs): 1548666 (5GiB) 00:08:50.853 Utilization (in LBAs): 1548666 (5GiB) 00:08:50.853 Thin Provisioning: Not Supported 00:08:50.853 Per-NS Atomic Units: No 00:08:50.853 Maximum Single Source Range Length: 128 00:08:50.853 Maximum Copy Length: 128 00:08:50.853 Maximum Source Range Count: 128 00:08:50.853 NGUID/EUI64 Never Reused: No 00:08:50.853 Namespace Write Protected: No 00:08:50.853 Number of LBA Formats: 8 00:08:50.853 Current LBA Format: LBA Format #07 00:08:50.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.853 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.853 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.853 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.853 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.853 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.853 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.853 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.853 00:08:50.853 NVM Specific Namespace Data 00:08:50.853 =========================== 00:08:50.853 Logical Block Storage Tag Mask: 0 00:08:50.853 Protection Information Capabilities: 00:08:50.853 16b Guard Protection Information Storage Tag Support: No 00:08:50.853 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.853 Storage Tag Check Read Support: No 00:08:50.853 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.853 ===================================================== 00:08:50.853 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.853 ===================================================== 00:08:50.853 Controller Capabilities/Features 00:08:50.853 ================================ 00:08:50.853 Vendor ID: 1b36 00:08:50.853 Subsystem Vendor ID: 1af4 00:08:50.853 Serial Number: 12341 00:08:50.853 Model Number: QEMU NVMe Ctrl 00:08:50.853 Firmware Version: 8.0.0 00:08:50.853 Recommended Arb Burst: 6 00:08:50.853 IEEE OUI Identifier: 00 54 52 00:08:50.853 Multi-path I/O 00:08:50.853 May have multiple subsystem ports: No 00:08:50.853 May have multiple controllers: No 00:08:50.853 Associated with SR-IOV VF: No 00:08:50.853 Max Data Transfer Size: 524288 00:08:50.853 Max Number of Namespaces: 256 00:08:50.853 Max Number of I/O Queues: 
64 00:08:50.853 NVMe Specification Version (VS): 1.4 00:08:50.853 NVMe Specification Version (Identify): 1.4 00:08:50.853 Maximum Queue Entries: 2048 00:08:50.853 Contiguous Queues Required: Yes 00:08:50.853 Arbitration Mechanisms Supported 00:08:50.853 Weighted Round Robin: Not Supported 00:08:50.853 Vendor Specific: Not Supported 00:08:50.853 Reset Timeout: 7500 ms 00:08:50.853 Doorbell Stride: 4 bytes 00:08:50.853 NVM Subsystem Reset: Not Supported 00:08:50.853 Command Sets Supported 00:08:50.853 NVM Command Set: Supported 00:08:50.853 Boot Partition: Not Supported 00:08:50.853 Memory Page Size Minimum: 4096 bytes 00:08:50.853 Memory Page Size Maximum: 65536 bytes 00:08:50.853 Persistent Memory Region: Not Supported 00:08:50.853 Optional Asynchronous Events Supported 00:08:50.853 Namespace Attribute Notices: Supported 00:08:50.853 Firmware Activation Notices: Not Supported 00:08:50.853 ANA Change Notices: Not Supported 00:08:50.853 PLE Aggregate Log Change Notices: Not Supported 00:08:50.853 LBA Status Info Alert Notices: Not Supported 00:08:50.853 EGE Aggregate Log Change Notices: Not Supported 00:08:50.853 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.853 Zone Descriptor Change Notices: Not Supported 00:08:50.853 Discovery Log Change Notices: Not Supported 00:08:50.853 Controller Attributes 00:08:50.853 128-bit Host Identifier: Not Supported 00:08:50.853 Non-Operational Permissive Mode: Not Supported 00:08:50.853 NVM Sets: Not Supported 00:08:50.853 Read Recovery Levels: Not Supported 00:08:50.853 Endurance Groups: Not Supported 00:08:50.853 Predictable Latency Mode: Not Supported 00:08:50.853 Traffic Based Keep Alive: Not Supported 00:08:50.853 Namespace Granularity: Not Supported 00:08:50.853 SQ Associations: Not Supported 00:08:50.853 UUID List: Not Supported 00:08:50.853 Multi-Domain Subsystem: Not Supported 00:08:50.853 Fixed Capacity Management: Not Supported 00:08:50.853 Variable Capacity Management: Not Supported 00:08:50.853 Delete Endurance Group: Not Supported 00:08:50.853 Delete NVM Set: Not Supported 00:08:50.853 Extended LBA Formats Supported: Supported 00:08:50.853 Flexible Data Placement Supported: Not Supported 00:08:50.853 00:08:50.853 Controller Memory Buffer Support 00:08:50.853 ================================ 00:08:50.853 Supported: No 00:08:50.853 00:08:50.853 Persistent Memory Region Support 00:08:50.853 ================================ 00:08:50.853 Supported: No 00:08:50.853 00:08:50.853 Admin Command Set Attributes 00:08:50.853 ============================ 00:08:50.853 Security Send/Receive: Not Supported 00:08:50.853 Format NVM: Supported 00:08:50.853 Firmware Activate/Download: Not Supported 00:08:50.853 Namespace Management: Supported 00:08:50.853 Device Self-Test: Not Supported 00:08:50.853 Directives: Supported 00:08:50.853 NVMe-MI: Not Supported 00:08:50.853 Virtualization Management: Not Supported 00:08:50.853 Doorbell Buffer Config: Supported 00:08:50.853 Get LBA Status Capability: Not Supported 00:08:50.853 Command & Feature Lockdown Capability: Not Supported 00:08:50.853 Abort Command Limit: 4 00:08:50.853 Async Event Request Limit: 4 00:08:50.853 Number of Firmware Slots: N/A 00:08:50.853 Firmware Slot 1 Read-Only: N/A 00:08:50.853 Firmware Activation Without Reset: N/A 00:08:50.853 Multiple Update Detection Support: N/A 00:08:50.853 Firmware Update Granularity: No Information Provided 00:08:50.853 Per-Namespace SMART Log: Yes 00:08:50.853 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.853 Subsystem NQN: 
nqn.2019-08.org.qemu:12341 00:08:50.853 Command Effects Log Page: Supported 00:08:50.853 Get Log Page Extended Data: Supported 00:08:50.853 Telemetry Log Pages: Not Supported 00:08:50.853 Persistent Event Log Pages: Not Supported 00:08:50.853 Supported Log Pages Log Page: May Support 00:08:50.853 Commands Supported & Effects Log Page: Not Supported 00:08:50.853 Feature Identifiers & Effects Log Page: May Support 00:08:50.853 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.853 Data Area 4 for Telemetry Log: Not Supported 00:08:50.853 Error Log Page Entries Supported: 1 00:08:50.853 Keep Alive: Not Supported 00:08:50.853 00:08:50.853 NVM Command Set Attributes 00:08:50.853 ========================== 00:08:50.853 Submission Queue Entry Size 00:08:50.853 Max: 64 00:08:50.853 Min: 64 00:08:50.853 Completion Queue Entry Size 00:08:50.853 Max: 16 00:08:50.853 Min: 16 00:08:50.853 Number of Namespaces: 256 00:08:50.853 Compare Command: Supported 00:08:50.853 Write Uncorrectable Command: Not Supported 00:08:50.853 Dataset Management Command: Supported 00:08:50.853 Write Zeroes Command: Supported 00:08:50.853 Set Features Save Field: Supported 00:08:50.853 Reservations: Not Supported 00:08:50.854 Timestamp: Supported 00:08:50.854 Copy: Supported 00:08:50.854 Volatile Write Cache: Present 00:08:50.854 Atomic Write Unit (Normal): 1 00:08:50.854 Atomic Write Unit (PFail): 1 00:08:50.854 Atomic Compare & Write Unit: 1 00:08:50.854 Fused Compare & Write: Not Supported 00:08:50.854 Scatter-Gather List 00:08:50.854 SGL Command Set: Supported 00:08:50.854 SGL Keyed: Not Supported 00:08:50.854 SGL Bit Bucket Descriptor: Not Supported 00:08:50.854 SGL Metadata Pointer: Not Supported 00:08:50.854 Oversized SGL: Not Supported 00:08:50.854 SGL Metadata Address: Not Supported 00:08:50.854 SGL Offset: Not Supported 00:08:50.854 Transport SGL Data Block: Not Supported 00:08:50.854 Replay Protected Memory Block: Not Supported 00:08:50.854 00:08:50.854 Firmware Slot Information 00:08:50.854 ========================= 00:08:50.854 Active slot: 1 00:08:50.854 Slot 1 Firmware Revision: 1.0 00:08:50.854 00:08:50.854 00:08:50.854 Commands Supported and Effects 00:08:50.854 ============================== 00:08:50.854 Admin Commands 00:08:50.854 -------------- 00:08:50.854 Delete I/O Submission Queue (00h): Supported 00:08:50.854 Create I/O Submission Queue (01h): Supported 00:08:50.854 Get Log Page (02h): Supported 00:08:50.854 Delete I/O Completion Queue (04h): Supported 00:08:50.854 Create I/O Completion Queue (05h): Supported 00:08:50.854 Identify (06h): Supported 00:08:50.854 Abort (08h): Supported 00:08:50.854 Set Features (09h): Supported 00:08:50.854 Get Features (0Ah): Supported 00:08:50.854 Asynchronous Event Request (0Ch): Supported 00:08:50.854 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.854 Directive Send (19h): Supported 00:08:50.854 Directive Receive (1Ah): Supported 00:08:50.854 Virtualization Management (1Ch): Supported 00:08:50.854 Doorbell Buffer Config (7Ch): Supported 00:08:50.854 Format NVM (80h): Supported LBA-Change 00:08:50.854 I/O Commands 00:08:50.854 ------------ 00:08:50.854 Flush (00h): Supported LBA-Change 00:08:50.854 Write (01h): Supported LBA-Change 00:08:50.854 Read (02h): Supported 00:08:50.854 Compare (05h): Supported 00:08:50.854 Write Zeroes (08h): Supported LBA-Change 00:08:50.854 Dataset Management (09h): Supported LBA-Change 00:08:50.854 Unknown (0Ch): Supported 00:08:50.854 Unknown (12h): Supported 00:08:50.854 Copy (19h): Supported LBA-Change 
00:08:50.854 Unknown (1Dh): Supported LBA-Change 00:08:50.854 00:08:50.854 Error Log 00:08:50.854 ========= 00:08:50.854 00:08:50.854 Arbitration 00:08:50.854 =========== 00:08:50.854 Arbitration Burst: no limit 00:08:50.854 00:08:50.854 Power Management 00:08:50.854 ================ 00:08:50.854 Number of Power States: 1 00:08:50.854 Current Power State: Power State #0 00:08:50.854 Power State #0: 00:08:50.854 Max Power: 25.00 W 00:08:50.854 Non-Operational State: Operational 00:08:50.854 Entry Latency: 16 microseconds 00:08:50.854 Exit Latency: 4 microseconds 00:08:50.854 Relative Read Throughput: 0 00:08:50.854 Relative Read Latency: 0 00:08:50.854 Relative Write Throughput: 0 00:08:50.854 Relative Write Latency: 0 00:08:50.854 Idle Power: Not Reported 00:08:50.854 Active Power: Not Reported 00:08:50.854 Non-Operational Permissive Mode: Not Supported 00:08:50.854 00:08:50.854 Health Information 00:08:50.854 ================== 00:08:50.854 Critical Warnings: 00:08:50.854 Available Spare Space: OK 00:08:50.854 [2024-10-25 15:13:33.492076] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64091 terminated unexpected [2024-10-25 15:13:33.492734] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64091 terminated unexpected Temperature: OK 00:08:50.854 Device Reliability: OK 00:08:50.854 Read Only: No 00:08:50.854 Volatile Memory Backup: OK 00:08:50.854 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.854 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.854 Available Spare: 0% 00:08:50.854 Available Spare Threshold: 0% 00:08:50.854 Life Percentage Used: 0% 00:08:50.854 Data Units Read: 1198 00:08:50.854 Data Units Written: 1058 00:08:50.854 Host Read Commands: 56022 00:08:50.854 Host Write Commands: 54718 00:08:50.854 Controller Busy Time: 0 minutes 00:08:50.854 Power Cycles: 0 00:08:50.854 Power On Hours: 0 hours 00:08:50.854 Unsafe Shutdowns: 0 00:08:50.854 Unrecoverable Media Errors: 0 00:08:50.854 Lifetime Error Log Entries: 0 00:08:50.854 Warning Temperature Time: 0 minutes 00:08:50.854 Critical Temperature Time: 0 minutes 00:08:50.854 00:08:50.854 Number of Queues 00:08:50.854 ================ 00:08:50.854 Number of I/O Submission Queues: 64 00:08:50.854 Number of I/O Completion Queues: 64 00:08:50.854 00:08:50.854 ZNS Specific Controller Data 00:08:50.854 ============================ 00:08:50.854 Zone Append Size Limit: 0 00:08:50.854 00:08:50.854 00:08:50.854 Active Namespaces 00:08:50.854 ================= 00:08:50.854 Namespace ID:1 00:08:50.854 Error Recovery Timeout: Unlimited 00:08:50.854 Command Set Identifier: NVM (00h) 00:08:50.854 Deallocate: Supported 00:08:50.854 Deallocated/Unwritten Error: Supported 00:08:50.854 Deallocated Read Value: All 0x00 00:08:50.854 Deallocate in Write Zeroes: Not Supported 00:08:50.854 Deallocated Guard Field: 0xFFFF 00:08:50.854 Flush: Supported 00:08:50.854 Reservation: Not Supported 00:08:50.854 Namespace Sharing Capabilities: Private 00:08:50.854 Size (in LBAs): 1310720 (5GiB) 00:08:50.854 Capacity (in LBAs): 1310720 (5GiB) 00:08:50.854 Utilization (in LBAs): 1310720 (5GiB) 00:08:50.854 Thin Provisioning: Not Supported 00:08:50.854 Per-NS Atomic Units: No 00:08:50.854 Maximum Single Source Range Length: 128 00:08:50.854 Maximum Copy Length: 128 00:08:50.854 Maximum Source Range Count: 128 00:08:50.854 NGUID/EUI64 Never Reused: No 00:08:50.854 Namespace Write Protected: No 00:08:50.854 Number of LBA Formats: 8 00:08:50.854 Current LBA 
Format: LBA Format #04 00:08:50.854 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.854 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.854 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.854 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.854 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.854 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.854 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.854 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.854 00:08:50.854 NVM Specific Namespace Data 00:08:50.854 =========================== 00:08:50.854 Logical Block Storage Tag Mask: 0 00:08:50.854 Protection Information Capabilities: 00:08:50.854 16b Guard Protection Information Storage Tag Support: No 00:08:50.854 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.854 Storage Tag Check Read Support: No 00:08:50.854 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.854 ===================================================== 00:08:50.854 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.854 ===================================================== 00:08:50.854 Controller Capabilities/Features 00:08:50.854 ================================ 00:08:50.854 Vendor ID: 1b36 00:08:50.854 Subsystem Vendor ID: 1af4 00:08:50.854 Serial Number: 12343 00:08:50.854 Model Number: QEMU NVMe Ctrl 00:08:50.854 Firmware Version: 8.0.0 00:08:50.854 Recommended Arb Burst: 6 00:08:50.854 IEEE OUI Identifier: 00 54 52 00:08:50.854 Multi-path I/O 00:08:50.854 May have multiple subsystem ports: No 00:08:50.854 May have multiple controllers: Yes 00:08:50.854 Associated with SR-IOV VF: No 00:08:50.854 Max Data Transfer Size: 524288 00:08:50.854 Max Number of Namespaces: 256 00:08:50.854 Max Number of I/O Queues: 64 00:08:50.854 NVMe Specification Version (VS): 1.4 00:08:50.854 NVMe Specification Version (Identify): 1.4 00:08:50.854 Maximum Queue Entries: 2048 00:08:50.854 Contiguous Queues Required: Yes 00:08:50.854 Arbitration Mechanisms Supported 00:08:50.854 Weighted Round Robin: Not Supported 00:08:50.854 Vendor Specific: Not Supported 00:08:50.854 Reset Timeout: 7500 ms 00:08:50.854 Doorbell Stride: 4 bytes 00:08:50.854 NVM Subsystem Reset: Not Supported 00:08:50.854 Command Sets Supported 00:08:50.854 NVM Command Set: Supported 00:08:50.854 Boot Partition: Not Supported 00:08:50.854 Memory Page Size Minimum: 4096 bytes 00:08:50.854 Memory Page Size Maximum: 65536 bytes 00:08:50.854 Persistent Memory Region: Not Supported 00:08:50.854 Optional Asynchronous Events Supported 00:08:50.854 Namespace Attribute Notices: Supported 00:08:50.854 Firmware Activation Notices: Not Supported 00:08:50.854 ANA Change Notices: Not Supported 00:08:50.854 PLE Aggregate 
Log Change Notices: Not Supported 00:08:50.855 LBA Status Info Alert Notices: Not Supported 00:08:50.855 EGE Aggregate Log Change Notices: Not Supported 00:08:50.855 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.855 Zone Descriptor Change Notices: Not Supported 00:08:50.855 Discovery Log Change Notices: Not Supported 00:08:50.855 Controller Attributes 00:08:50.855 128-bit Host Identifier: Not Supported 00:08:50.855 Non-Operational Permissive Mode: Not Supported 00:08:50.855 NVM Sets: Not Supported 00:08:50.855 Read Recovery Levels: Not Supported 00:08:50.855 Endurance Groups: Supported 00:08:50.855 Predictable Latency Mode: Not Supported 00:08:50.855 Traffic Based Keep Alive: Not Supported 00:08:50.855 Namespace Granularity: Not Supported 00:08:50.855 SQ Associations: Not Supported 00:08:50.855 UUID List: Not Supported 00:08:50.855 Multi-Domain Subsystem: Not Supported 00:08:50.855 Fixed Capacity Management: Not Supported 00:08:50.855 Variable Capacity Management: Not Supported 00:08:50.855 Delete Endurance Group: Not Supported 00:08:50.855 Delete NVM Set: Not Supported 00:08:50.855 Extended LBA Formats Supported: Supported 00:08:50.855 Flexible Data Placement Supported: Supported 00:08:50.855 00:08:50.855 Controller Memory Buffer Support 00:08:50.855 ================================ 00:08:50.855 Supported: No 00:08:50.855 00:08:50.855 Persistent Memory Region Support 00:08:50.855 ================================ 00:08:50.855 Supported: No 00:08:50.855 00:08:50.855 Admin Command Set Attributes 00:08:50.855 ============================ 00:08:50.855 Security Send/Receive: Not Supported 00:08:50.855 Format NVM: Supported 00:08:50.855 Firmware Activate/Download: Not Supported 00:08:50.855 Namespace Management: Supported 00:08:50.855 Device Self-Test: Not Supported 00:08:50.855 Directives: Supported 00:08:50.855 NVMe-MI: Not Supported 00:08:50.855 Virtualization Management: Not Supported 00:08:50.855 Doorbell Buffer Config: Supported 00:08:50.855 Get LBA Status Capability: Not Supported 00:08:50.855 Command & Feature Lockdown Capability: Not Supported 00:08:50.855 Abort Command Limit: 4 00:08:50.855 Async Event Request Limit: 4 00:08:50.855 Number of Firmware Slots: N/A 00:08:50.855 Firmware Slot 1 Read-Only: N/A 00:08:50.855 Firmware Activation Without Reset: N/A 00:08:50.855 Multiple Update Detection Support: N/A 00:08:50.855 Firmware Update Granularity: No Information Provided 00:08:50.855 Per-Namespace SMART Log: Yes 00:08:50.855 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.855 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:50.855 Command Effects Log Page: Supported 00:08:50.855 Get Log Page Extended Data: Supported 00:08:50.855 Telemetry Log Pages: Not Supported 00:08:50.855 Persistent Event Log Pages: Not Supported 00:08:50.855 Supported Log Pages Log Page: May Support 00:08:50.855 Commands Supported & Effects Log Page: Not Supported 00:08:50.855 Feature Identifiers & Effects Log Page: May Support 00:08:50.855 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.855 Data Area 4 for Telemetry Log: Not Supported 00:08:50.855 Error Log Page Entries Supported: 1 00:08:50.855 Keep Alive: Not Supported 00:08:50.855 00:08:50.855 NVM Command Set Attributes 00:08:50.855 ========================== 00:08:50.855 Submission Queue Entry Size 00:08:50.855 Max: 64 00:08:50.855 Min: 64 00:08:50.855 Completion Queue Entry Size 00:08:50.855 Max: 16 00:08:50.855 Min: 16 00:08:50.855 Number of Namespaces: 256 00:08:50.855 Compare Command: Supported 00:08:50.855 Write 
Uncorrectable Command: Not Supported 00:08:50.855 Dataset Management Command: Supported 00:08:50.855 Write Zeroes Command: Supported 00:08:50.855 Set Features Save Field: Supported 00:08:50.855 Reservations: Not Supported 00:08:50.855 Timestamp: Supported 00:08:50.855 Copy: Supported 00:08:50.855 Volatile Write Cache: Present 00:08:50.855 Atomic Write Unit (Normal): 1 00:08:50.855 Atomic Write Unit (PFail): 1 00:08:50.855 Atomic Compare & Write Unit: 1 00:08:50.855 Fused Compare & Write: Not Supported 00:08:50.855 Scatter-Gather List 00:08:50.855 SGL Command Set: Supported 00:08:50.855 SGL Keyed: Not Supported 00:08:50.855 SGL Bit Bucket Descriptor: Not Supported 00:08:50.855 SGL Metadata Pointer: Not Supported 00:08:50.855 Oversized SGL: Not Supported 00:08:50.855 SGL Metadata Address: Not Supported 00:08:50.855 SGL Offset: Not Supported 00:08:50.855 Transport SGL Data Block: Not Supported 00:08:50.855 Replay Protected Memory Block: Not Supported 00:08:50.855 00:08:50.855 Firmware Slot Information 00:08:50.855 ========================= 00:08:50.855 Active slot: 1 00:08:50.855 Slot 1 Firmware Revision: 1.0 00:08:50.855 00:08:50.855 00:08:50.855 Commands Supported and Effects 00:08:50.855 ============================== 00:08:50.855 Admin Commands 00:08:50.855 -------------- 00:08:50.855 Delete I/O Submission Queue (00h): Supported 00:08:50.855 Create I/O Submission Queue (01h): Supported 00:08:50.855 Get Log Page (02h): Supported 00:08:50.855 Delete I/O Completion Queue (04h): Supported 00:08:50.855 Create I/O Completion Queue (05h): Supported 00:08:50.855 Identify (06h): Supported 00:08:50.855 Abort (08h): Supported 00:08:50.855 Set Features (09h): Supported 00:08:50.855 Get Features (0Ah): Supported 00:08:50.855 Asynchronous Event Request (0Ch): Supported 00:08:50.855 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.855 Directive Send (19h): Supported 00:08:50.855 Directive Receive (1Ah): Supported 00:08:50.855 Virtualization Management (1Ch): Supported 00:08:50.855 Doorbell Buffer Config (7Ch): Supported 00:08:50.855 Format NVM (80h): Supported LBA-Change 00:08:50.855 I/O Commands 00:08:50.855 ------------ 00:08:50.855 Flush (00h): Supported LBA-Change 00:08:50.855 Write (01h): Supported LBA-Change 00:08:50.855 Read (02h): Supported 00:08:50.855 Compare (05h): Supported 00:08:50.855 Write Zeroes (08h): Supported LBA-Change 00:08:50.855 Dataset Management (09h): Supported LBA-Change 00:08:50.855 Unknown (0Ch): Supported 00:08:50.855 Unknown (12h): Supported 00:08:50.855 Copy (19h): Supported LBA-Change 00:08:50.855 Unknown (1Dh): Supported LBA-Change 00:08:50.855 00:08:50.855 Error Log 00:08:50.855 ========= 00:08:50.855 00:08:50.855 Arbitration 00:08:50.855 =========== 00:08:50.855 Arbitration Burst: no limit 00:08:50.855 00:08:50.855 Power Management 00:08:50.855 ================ 00:08:50.855 Number of Power States: 1 00:08:50.855 Current Power State: Power State #0 00:08:50.855 Power State #0: 00:08:50.855 Max Power: 25.00 W 00:08:50.855 Non-Operational State: Operational 00:08:50.855 Entry Latency: 16 microseconds 00:08:50.855 Exit Latency: 4 microseconds 00:08:50.855 Relative Read Throughput: 0 00:08:50.855 Relative Read Latency: 0 00:08:50.855 Relative Write Throughput: 0 00:08:50.855 Relative Write Latency: 0 00:08:50.855 Idle Power: Not Reported 00:08:50.855 Active Power: Not Reported 00:08:50.855 Non-Operational Permissive Mode: Not Supported 00:08:50.855 00:08:50.855 Health Information 00:08:50.855 ================== 00:08:50.855 Critical Warnings: 00:08:50.855 
Available Spare Space: OK 00:08:50.855 Temperature: OK 00:08:50.855 Device Reliability: OK 00:08:50.855 Read Only: No 00:08:50.855 Volatile Memory Backup: OK 00:08:50.855 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.855 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.855 Available Spare: 0% 00:08:50.855 Available Spare Threshold: 0% 00:08:50.855 Life Percentage Used: 0% 00:08:50.855 Data Units Read: 920 00:08:50.855 Data Units Written: 849 00:08:50.855 Host Read Commands: 39064 00:08:50.855 Host Write Commands: 38487 00:08:50.855 Controller Busy Time: 0 minutes 00:08:50.855 Power Cycles: 0 00:08:50.855 Power On Hours: 0 hours 00:08:50.855 Unsafe Shutdowns: 0 00:08:50.855 Unrecoverable Media Errors: 0 00:08:50.855 Lifetime Error Log Entries: 0 00:08:50.855 Warning Temperature Time: 0 minutes 00:08:50.855 Critical Temperature Time: 0 minutes 00:08:50.855 00:08:50.855 Number of Queues 00:08:50.855 ================ 00:08:50.855 Number of I/O Submission Queues: 64 00:08:50.855 Number of I/O Completion Queues: 64 00:08:50.855 00:08:50.855 ZNS Specific Controller Data 00:08:50.855 ============================ 00:08:50.855 Zone Append Size Limit: 0 00:08:50.855 00:08:50.855 00:08:50.855 Active Namespaces 00:08:50.855 ================= 00:08:50.855 Namespace ID:1 00:08:50.855 Error Recovery Timeout: Unlimited 00:08:50.855 Command Set Identifier: NVM (00h) 00:08:50.855 Deallocate: Supported 00:08:50.855 Deallocated/Unwritten Error: Supported 00:08:50.855 Deallocated Read Value: All 0x00 00:08:50.855 Deallocate in Write Zeroes: Not Supported 00:08:50.855 Deallocated Guard Field: 0xFFFF 00:08:50.855 Flush: Supported 00:08:50.855 Reservation: Not Supported 00:08:50.855 Namespace Sharing Capabilities: Multiple Controllers 00:08:50.855 Size (in LBAs): 262144 (1GiB) 00:08:50.856 Capacity (in LBAs): 262144 (1GiB) 00:08:50.856 Utilization (in LBAs): 262144 (1GiB) 00:08:50.856 Thin Provisioning: Not Supported 00:08:50.856 Per-NS Atomic Units: No 00:08:50.856 Maximum Single Source Range Length: 128 00:08:50.856 Maximum Copy Length: 128 00:08:50.856 Maximum Source Range Count: 128 00:08:50.856 NGUID/EUI64 Never Reused: No 00:08:50.856 Namespace Write Protected: No 00:08:50.856 Endurance group ID: 1 00:08:50.856 Number of LBA Formats: 8 00:08:50.856 Current LBA Format: LBA Format #04 00:08:50.856 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.856 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.856 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.856 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.856 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.856 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.856 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.856 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.856 00:08:50.856 Get Feature FDP: 00:08:50.856 ================ 00:08:50.856 Enabled: Yes 00:08:50.856 FDP configuration index: 0 00:08:50.856 00:08:50.856 FDP configurations log page 00:08:50.856 =========================== 00:08:50.856 Number of FDP configurations: 1 00:08:50.856 Version: 0 00:08:50.856 Size: 112 00:08:50.856 FDP Configuration Descriptor: 0 00:08:50.856 Descriptor Size: 96 00:08:50.856 Reclaim Group Identifier format: 2 00:08:50.856 FDP Volatile Write Cache: Not Present 00:08:50.856 FDP Configuration: Valid 00:08:50.856 Vendor Specific Size: 0 00:08:50.856 Number of Reclaim Groups: 2 00:08:50.856 Number of Reclaim Unit Handles: 8 00:08:50.856 Max Placement Identifiers: 128 00:08:50.856 Number of 
Namespaces Supported: 256 00:08:50.856 Reclaim Unit Nominal Size: 6000000 bytes 00:08:50.856 Estimated Reclaim Unit Time Limit: Not Reported 00:08:50.856 RUH Desc #000: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #001: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #002: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #003: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #004: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #005: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #006: RUH Type: Initially Isolated 00:08:50.856 RUH Desc #007: RUH Type: Initially Isolated 00:08:50.856 00:08:50.856 FDP reclaim unit handle usage log page 00:08:50.856 ====================================== 00:08:50.856 Number of Reclaim Unit Handles: 8 00:08:50.856 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:50.856 RUH Usage Desc #001: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #002: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #003: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #004: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #005: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #006: RUH Attributes: Unused 00:08:50.856 RUH Usage Desc #007: RUH Attributes: Unused 00:08:50.856 00:08:50.856 FDP statistics log page 00:08:50.856 ======================= 00:08:50.856 Host bytes with metadata written: 540319744 00:08:50.856 [2024-10-25 15:13:33.493997] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64091 terminated unexpected 00:08:50.856 Media bytes with metadata written: 542932992 00:08:50.856 Media bytes erased: 0 00:08:50.856 00:08:50.856 FDP events log page 00:08:50.856 =================== 00:08:50.856 Number of FDP events: 0 00:08:50.856 00:08:50.856 NVM Specific Namespace Data 00:08:50.856 =========================== 00:08:50.856 Logical Block Storage Tag Mask: 0 00:08:50.856 Protection Information Capabilities: 00:08:50.856 16b Guard Protection Information Storage Tag Support: No 00:08:50.856 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.856 Storage Tag Check Read Support: No 00:08:50.856 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.856 ===================================================== 00:08:50.856 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.856 ===================================================== 00:08:50.856 Controller Capabilities/Features 00:08:50.856 ================================ 00:08:50.856 Vendor ID: 1b36 00:08:50.856 Subsystem Vendor ID: 1af4 00:08:50.856 Serial Number: 12342 00:08:50.856 Model Number: QEMU NVMe Ctrl 00:08:50.856 Firmware Version: 8.0.0 00:08:50.856 Recommended Arb Burst: 6 00:08:50.856 IEEE OUI Identifier: 00 54 52 00:08:50.856 Multi-path I/O 
00:08:50.856 May have multiple subsystem ports: No 00:08:50.856 May have multiple controllers: No 00:08:50.856 Associated with SR-IOV VF: No 00:08:50.856 Max Data Transfer Size: 524288 00:08:50.856 Max Number of Namespaces: 256 00:08:50.856 Max Number of I/O Queues: 64 00:08:50.856 NVMe Specification Version (VS): 1.4 00:08:50.856 NVMe Specification Version (Identify): 1.4 00:08:50.856 Maximum Queue Entries: 2048 00:08:50.856 Contiguous Queues Required: Yes 00:08:50.856 Arbitration Mechanisms Supported 00:08:50.856 Weighted Round Robin: Not Supported 00:08:50.856 Vendor Specific: Not Supported 00:08:50.856 Reset Timeout: 7500 ms 00:08:50.856 Doorbell Stride: 4 bytes 00:08:50.856 NVM Subsystem Reset: Not Supported 00:08:50.856 Command Sets Supported 00:08:50.856 NVM Command Set: Supported 00:08:50.856 Boot Partition: Not Supported 00:08:50.856 Memory Page Size Minimum: 4096 bytes 00:08:50.856 Memory Page Size Maximum: 65536 bytes 00:08:50.856 Persistent Memory Region: Not Supported 00:08:50.856 Optional Asynchronous Events Supported 00:08:50.856 Namespace Attribute Notices: Supported 00:08:50.856 Firmware Activation Notices: Not Supported 00:08:50.856 ANA Change Notices: Not Supported 00:08:50.856 PLE Aggregate Log Change Notices: Not Supported 00:08:50.856 LBA Status Info Alert Notices: Not Supported 00:08:50.856 EGE Aggregate Log Change Notices: Not Supported 00:08:50.856 Normal NVM Subsystem Shutdown event: Not Supported 00:08:50.856 Zone Descriptor Change Notices: Not Supported 00:08:50.856 Discovery Log Change Notices: Not Supported 00:08:50.856 Controller Attributes 00:08:50.856 128-bit Host Identifier: Not Supported 00:08:50.856 Non-Operational Permissive Mode: Not Supported 00:08:50.856 NVM Sets: Not Supported 00:08:50.856 Read Recovery Levels: Not Supported 00:08:50.856 Endurance Groups: Not Supported 00:08:50.856 Predictable Latency Mode: Not Supported 00:08:50.856 Traffic Based Keep Alive: Not Supported 00:08:50.856 Namespace Granularity: Not Supported 00:08:50.856 SQ Associations: Not Supported 00:08:50.857 UUID List: Not Supported 00:08:50.857 Multi-Domain Subsystem: Not Supported 00:08:50.857 Fixed Capacity Management: Not Supported 00:08:50.857 Variable Capacity Management: Not Supported 00:08:50.857 Delete Endurance Group: Not Supported 00:08:50.857 Delete NVM Set: Not Supported 00:08:50.857 Extended LBA Formats Supported: Supported 00:08:50.857 Flexible Data Placement Supported: Not Supported 00:08:50.857 00:08:50.857 Controller Memory Buffer Support 00:08:50.857 ================================ 00:08:50.857 Supported: No 00:08:50.857 00:08:50.857 Persistent Memory Region Support 00:08:50.857 ================================ 00:08:50.857 Supported: No 00:08:50.857 00:08:50.857 Admin Command Set Attributes 00:08:50.857 ============================ 00:08:50.857 Security Send/Receive: Not Supported 00:08:50.857 Format NVM: Supported 00:08:50.857 Firmware Activate/Download: Not Supported 00:08:50.857 Namespace Management: Supported 00:08:50.857 Device Self-Test: Not Supported 00:08:50.857 Directives: Supported 00:08:50.857 NVMe-MI: Not Supported 00:08:50.857 Virtualization Management: Not Supported 00:08:50.857 Doorbell Buffer Config: Supported 00:08:50.857 Get LBA Status Capability: Not Supported 00:08:50.857 Command & Feature Lockdown Capability: Not Supported 00:08:50.857 Abort Command Limit: 4 00:08:50.857 Async Event Request Limit: 4 00:08:50.857 Number of Firmware Slots: N/A 00:08:50.857 Firmware Slot 1 Read-Only: N/A 00:08:50.857 Firmware Activation Without Reset: N/A 
00:08:50.857 Multiple Update Detection Support: N/A 00:08:50.857 Firmware Update Granularity: No Information Provided 00:08:50.857 Per-Namespace SMART Log: Yes 00:08:50.857 Asymmetric Namespace Access Log Page: Not Supported 00:08:50.857 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:50.857 Command Effects Log Page: Supported 00:08:50.857 Get Log Page Extended Data: Supported 00:08:50.857 Telemetry Log Pages: Not Supported 00:08:50.857 Persistent Event Log Pages: Not Supported 00:08:50.857 Supported Log Pages Log Page: May Support 00:08:50.857 Commands Supported & Effects Log Page: Not Supported 00:08:50.857 Feature Identifiers & Effects Log Page: May Support 00:08:50.857 NVMe-MI Commands & Effects Log Page: May Support 00:08:50.857 Data Area 4 for Telemetry Log: Not Supported 00:08:50.857 Error Log Page Entries Supported: 1 00:08:50.857 Keep Alive: Not Supported 00:08:50.857 00:08:50.857 NVM Command Set Attributes 00:08:50.857 ========================== 00:08:50.857 Submission Queue Entry Size 00:08:50.857 Max: 64 00:08:50.857 Min: 64 00:08:50.857 Completion Queue Entry Size 00:08:50.857 Max: 16 00:08:50.857 Min: 16 00:08:50.857 Number of Namespaces: 256 00:08:50.857 Compare Command: Supported 00:08:50.857 Write Uncorrectable Command: Not Supported 00:08:50.857 Dataset Management Command: Supported 00:08:50.857 Write Zeroes Command: Supported 00:08:50.857 Set Features Save Field: Supported 00:08:50.857 Reservations: Not Supported 00:08:50.857 Timestamp: Supported 00:08:50.857 Copy: Supported 00:08:50.857 Volatile Write Cache: Present 00:08:50.857 Atomic Write Unit (Normal): 1 00:08:50.857 Atomic Write Unit (PFail): 1 00:08:50.857 Atomic Compare & Write Unit: 1 00:08:50.857 Fused Compare & Write: Not Supported 00:08:50.857 Scatter-Gather List 00:08:50.857 SGL Command Set: Supported 00:08:50.857 SGL Keyed: Not Supported 00:08:50.857 SGL Bit Bucket Descriptor: Not Supported 00:08:50.857 SGL Metadata Pointer: Not Supported 00:08:50.857 Oversized SGL: Not Supported 00:08:50.857 SGL Metadata Address: Not Supported 00:08:50.857 SGL Offset: Not Supported 00:08:50.857 Transport SGL Data Block: Not Supported 00:08:50.857 Replay Protected Memory Block: Not Supported 00:08:50.857 00:08:50.857 Firmware Slot Information 00:08:50.857 ========================= 00:08:50.857 Active slot: 1 00:08:50.857 Slot 1 Firmware Revision: 1.0 00:08:50.857 00:08:50.857 00:08:50.857 Commands Supported and Effects 00:08:50.857 ============================== 00:08:50.857 Admin Commands 00:08:50.857 -------------- 00:08:50.857 Delete I/O Submission Queue (00h): Supported 00:08:50.857 Create I/O Submission Queue (01h): Supported 00:08:50.857 Get Log Page (02h): Supported 00:08:50.857 Delete I/O Completion Queue (04h): Supported 00:08:50.857 Create I/O Completion Queue (05h): Supported 00:08:50.857 Identify (06h): Supported 00:08:50.857 Abort (08h): Supported 00:08:50.857 Set Features (09h): Supported 00:08:50.857 Get Features (0Ah): Supported 00:08:50.857 Asynchronous Event Request (0Ch): Supported 00:08:50.857 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:50.857 Directive Send (19h): Supported 00:08:50.857 Directive Receive (1Ah): Supported 00:08:50.857 Virtualization Management (1Ch): Supported 00:08:50.857 Doorbell Buffer Config (7Ch): Supported 00:08:50.857 Format NVM (80h): Supported LBA-Change 00:08:50.857 I/O Commands 00:08:50.857 ------------ 00:08:50.857 Flush (00h): Supported LBA-Change 00:08:50.857 Write (01h): Supported LBA-Change 00:08:50.857 Read (02h): Supported 00:08:50.857 Compare (05h): 
Supported 00:08:50.857 Write Zeroes (08h): Supported LBA-Change 00:08:50.857 Dataset Management (09h): Supported LBA-Change 00:08:50.857 Unknown (0Ch): Supported 00:08:50.857 Unknown (12h): Supported 00:08:50.857 Copy (19h): Supported LBA-Change 00:08:50.857 Unknown (1Dh): Supported LBA-Change 00:08:50.857 00:08:50.857 Error Log 00:08:50.857 ========= 00:08:50.857 00:08:50.857 Arbitration 00:08:50.857 =========== 00:08:50.857 Arbitration Burst: no limit 00:08:50.857 00:08:50.857 Power Management 00:08:50.857 ================ 00:08:50.857 Number of Power States: 1 00:08:50.857 Current Power State: Power State #0 00:08:50.857 Power State #0: 00:08:50.857 Max Power: 25.00 W 00:08:50.857 Non-Operational State: Operational 00:08:50.857 Entry Latency: 16 microseconds 00:08:50.857 Exit Latency: 4 microseconds 00:08:50.857 Relative Read Throughput: 0 00:08:50.857 Relative Read Latency: 0 00:08:50.857 Relative Write Throughput: 0 00:08:50.857 Relative Write Latency: 0 00:08:50.857 Idle Power: Not Reported 00:08:50.857 Active Power: Not Reported 00:08:50.857 Non-Operational Permissive Mode: Not Supported 00:08:50.857 00:08:50.857 Health Information 00:08:50.857 ================== 00:08:50.857 Critical Warnings: 00:08:50.857 Available Spare Space: OK 00:08:50.857 Temperature: OK 00:08:50.857 Device Reliability: OK 00:08:50.857 Read Only: No 00:08:50.857 Volatile Memory Backup: OK 00:08:50.857 Current Temperature: 323 Kelvin (50 Celsius) 00:08:50.857 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:50.857 Available Spare: 0% 00:08:50.857 Available Spare Threshold: 0% 00:08:50.857 Life Percentage Used: 0% 00:08:50.857 Data Units Read: 2517 00:08:50.857 Data Units Written: 2304 00:08:50.857 Host Read Commands: 115124 00:08:50.857 Host Write Commands: 113393 00:08:50.857 Controller Busy Time: 0 minutes 00:08:50.857 Power Cycles: 0 00:08:50.857 Power On Hours: 0 hours 00:08:50.857 Unsafe Shutdowns: 0 00:08:50.857 Unrecoverable Media Errors: 0 00:08:50.857 Lifetime Error Log Entries: 0 00:08:50.857 Warning Temperature Time: 0 minutes 00:08:50.857 Critical Temperature Time: 0 minutes 00:08:50.857 00:08:50.857 Number of Queues 00:08:50.857 ================ 00:08:50.857 Number of I/O Submission Queues: 64 00:08:50.857 Number of I/O Completion Queues: 64 00:08:50.857 00:08:50.857 ZNS Specific Controller Data 00:08:50.857 ============================ 00:08:50.857 Zone Append Size Limit: 0 00:08:50.857 00:08:50.857 00:08:50.857 Active Namespaces 00:08:50.857 ================= 00:08:50.857 Namespace ID:1 00:08:50.857 Error Recovery Timeout: Unlimited 00:08:50.857 Command Set Identifier: NVM (00h) 00:08:50.857 Deallocate: Supported 00:08:50.857 Deallocated/Unwritten Error: Supported 00:08:50.857 Deallocated Read Value: All 0x00 00:08:50.857 Deallocate in Write Zeroes: Not Supported 00:08:50.857 Deallocated Guard Field: 0xFFFF 00:08:50.857 Flush: Supported 00:08:50.857 Reservation: Not Supported 00:08:50.857 Namespace Sharing Capabilities: Private 00:08:50.857 Size (in LBAs): 1048576 (4GiB) 00:08:50.857 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.857 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.857 Thin Provisioning: Not Supported 00:08:50.857 Per-NS Atomic Units: No 00:08:50.857 Maximum Single Source Range Length: 128 00:08:50.857 Maximum Copy Length: 128 00:08:50.857 Maximum Source Range Count: 128 00:08:50.857 NGUID/EUI64 Never Reused: No 00:08:50.857 Namespace Write Protected: No 00:08:50.857 Number of LBA Formats: 8 00:08:50.857 Current LBA Format: LBA Format #04 00:08:50.857 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:50.857 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.857 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.857 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.857 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.857 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.857 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.857 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.857 00:08:50.857 NVM Specific Namespace Data 00:08:50.857 =========================== 00:08:50.857 Logical Block Storage Tag Mask: 0 00:08:50.858 Protection Information Capabilities: 00:08:50.858 16b Guard Protection Information Storage Tag Support: No 00:08:50.858 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.858 Storage Tag Check Read Support: No 00:08:50.858 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Namespace ID:2 00:08:50.858 Error Recovery Timeout: Unlimited 00:08:50.858 Command Set Identifier: NVM (00h) 00:08:50.858 Deallocate: Supported 00:08:50.858 Deallocated/Unwritten Error: Supported 00:08:50.858 Deallocated Read Value: All 0x00 00:08:50.858 Deallocate in Write Zeroes: Not Supported 00:08:50.858 Deallocated Guard Field: 0xFFFF 00:08:50.858 Flush: Supported 00:08:50.858 Reservation: Not Supported 00:08:50.858 Namespace Sharing Capabilities: Private 00:08:50.858 Size (in LBAs): 1048576 (4GiB) 00:08:50.858 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.858 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.858 Thin Provisioning: Not Supported 00:08:50.858 Per-NS Atomic Units: No 00:08:50.858 Maximum Single Source Range Length: 128 00:08:50.858 Maximum Copy Length: 128 00:08:50.858 Maximum Source Range Count: 128 00:08:50.858 NGUID/EUI64 Never Reused: No 00:08:50.858 Namespace Write Protected: No 00:08:50.858 Number of LBA Formats: 8 00:08:50.858 Current LBA Format: LBA Format #04 00:08:50.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.858 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.858 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.858 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.858 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.858 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.858 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.858 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.858 00:08:50.858 NVM Specific Namespace Data 00:08:50.858 =========================== 00:08:50.858 Logical Block Storage Tag Mask: 0 00:08:50.858 Protection Information Capabilities: 00:08:50.858 16b Guard Protection Information Storage Tag Support: No 00:08:50.858 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
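[Annotation] The repeated "LBA Format #NN: Data Size: ... Metadata Size: ..." rows above come from the namespace's LBA format table in the Identify Namespace data; the "(in use)" format is the one selected by FLBAS (here format #04, 4096-byte blocks). A minimal C sketch for reading the same table through SPDK's public API, assuming an already-attached controller and an active namespace handle 'ns'; the helper name print_lba_formats is illustrative, not part of SPDK.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Print each supported LBA format of a namespace, mirroring the
     * "LBA Format #NN: Data Size: ... Metadata Size: ..." rows above.
     * Illustrative helper; assumes 'ns' is an active namespace. */
    static void
    print_lba_formats(struct spdk_nvme_ns *ns)
    {
        const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);
        uint8_t i;

        /* nlbaf is 0-based: a value of 7 means 8 formats (#00..#07). */
        for (i = 0; i <= nsdata->nlbaf; i++) {
            printf("LBA Format #%02u: Data Size: %u Metadata Size: %u%s\n",
                   i,
                   1u << nsdata->lbaf[i].lbads, /* LBADS is log2(data size) */
                   nsdata->lbaf[i].ms,
                   i == nsdata->flbas.format ? " (in use)" : "");
        }
    }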
00:08:50.858 Storage Tag Check Read Support: No 00:08:50.858 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Namespace ID:3 00:08:50.858 Error Recovery Timeout: Unlimited 00:08:50.858 Command Set Identifier: NVM (00h) 00:08:50.858 Deallocate: Supported 00:08:50.858 Deallocated/Unwritten Error: Supported 00:08:50.858 Deallocated Read Value: All 0x00 00:08:50.858 Deallocate in Write Zeroes: Not Supported 00:08:50.858 Deallocated Guard Field: 0xFFFF 00:08:50.858 Flush: Supported 00:08:50.858 Reservation: Not Supported 00:08:50.858 Namespace Sharing Capabilities: Private 00:08:50.858 Size (in LBAs): 1048576 (4GiB) 00:08:50.858 Capacity (in LBAs): 1048576 (4GiB) 00:08:50.858 Utilization (in LBAs): 1048576 (4GiB) 00:08:50.858 Thin Provisioning: Not Supported 00:08:50.858 Per-NS Atomic Units: No 00:08:50.858 Maximum Single Source Range Length: 128 00:08:50.858 Maximum Copy Length: 128 00:08:50.858 Maximum Source Range Count: 128 00:08:50.858 NGUID/EUI64 Never Reused: No 00:08:50.858 Namespace Write Protected: No 00:08:50.858 Number of LBA Formats: 8 00:08:50.858 Current LBA Format: LBA Format #04 00:08:50.858 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:50.858 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:50.858 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:50.858 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:50.858 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:50.858 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:50.858 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:50.858 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:50.858 00:08:50.858 NVM Specific Namespace Data 00:08:50.858 =========================== 00:08:50.858 Logical Block Storage Tag Mask: 0 00:08:50.858 Protection Information Capabilities: 00:08:50.858 16b Guard Protection Information Storage Tag Support: No 00:08:50.858 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:50.858 Storage Tag Check Read Support: No 00:08:50.858 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:50.858 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:50.858 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:51.118 ===================================================== 00:08:51.118 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.118 ===================================================== 00:08:51.118 Controller Capabilities/Features 00:08:51.118 ================================ 00:08:51.118 Vendor ID: 1b36 00:08:51.118 Subsystem Vendor ID: 1af4 00:08:51.118 Serial Number: 12340 00:08:51.118 Model Number: QEMU NVMe Ctrl 00:08:51.118 Firmware Version: 8.0.0 00:08:51.118 Recommended Arb Burst: 6 00:08:51.118 IEEE OUI Identifier: 00 54 52 00:08:51.118 Multi-path I/O 00:08:51.118 May have multiple subsystem ports: No 00:08:51.118 May have multiple controllers: No 00:08:51.118 Associated with SR-IOV VF: No 00:08:51.118 Max Data Transfer Size: 524288 00:08:51.118 Max Number of Namespaces: 256 00:08:51.118 Max Number of I/O Queues: 64 00:08:51.118 NVMe Specification Version (VS): 1.4 00:08:51.118 NVMe Specification Version (Identify): 1.4 00:08:51.118 Maximum Queue Entries: 2048 00:08:51.118 Contiguous Queues Required: Yes 00:08:51.118 Arbitration Mechanisms Supported 00:08:51.118 Weighted Round Robin: Not Supported 00:08:51.118 Vendor Specific: Not Supported 00:08:51.118 Reset Timeout: 7500 ms 00:08:51.118 Doorbell Stride: 4 bytes 00:08:51.118 NVM Subsystem Reset: Not Supported 00:08:51.118 Command Sets Supported 00:08:51.118 NVM Command Set: Supported 00:08:51.118 Boot Partition: Not Supported 00:08:51.118 Memory Page Size Minimum: 4096 bytes 00:08:51.118 Memory Page Size Maximum: 65536 bytes 00:08:51.118 Persistent Memory Region: Not Supported 00:08:51.118 Optional Asynchronous Events Supported 00:08:51.118 Namespace Attribute Notices: Supported 00:08:51.118 Firmware Activation Notices: Not Supported 00:08:51.118 ANA Change Notices: Not Supported 00:08:51.118 PLE Aggregate Log Change Notices: Not Supported 00:08:51.118 LBA Status Info Alert Notices: Not Supported 00:08:51.118 EGE Aggregate Log Change Notices: Not Supported 00:08:51.118 Normal NVM Subsystem Shutdown event: Not Supported 00:08:51.118 Zone Descriptor Change Notices: Not Supported 00:08:51.118 Discovery Log Change Notices: Not Supported 00:08:51.118 Controller Attributes 00:08:51.118 128-bit Host Identifier: Not Supported 00:08:51.118 Non-Operational Permissive Mode: Not Supported 00:08:51.118 NVM Sets: Not Supported 00:08:51.118 Read Recovery Levels: Not Supported 00:08:51.118 Endurance Groups: Not Supported 00:08:51.118 Predictable Latency Mode: Not Supported 00:08:51.118 Traffic Based Keep ALive: Not Supported 00:08:51.118 Namespace Granularity: Not Supported 00:08:51.118 SQ Associations: Not Supported 00:08:51.118 UUID List: Not Supported 00:08:51.118 Multi-Domain Subsystem: Not Supported 00:08:51.118 Fixed Capacity Management: Not Supported 00:08:51.118 Variable Capacity Management: Not Supported 00:08:51.118 Delete Endurance Group: Not Supported 00:08:51.118 Delete NVM Set: Not Supported 00:08:51.118 Extended LBA Formats Supported: Supported 00:08:51.118 Flexible Data Placement Supported: Not Supported 00:08:51.118 00:08:51.118 Controller Memory Buffer Support 00:08:51.118 ================================ 00:08:51.118 Supported: No 00:08:51.118 00:08:51.118 Persistent Memory Region Support 00:08:51.118 
================================ 00:08:51.118 Supported: No 00:08:51.118 00:08:51.118 Admin Command Set Attributes 00:08:51.118 ============================ 00:08:51.118 Security Send/Receive: Not Supported 00:08:51.118 Format NVM: Supported 00:08:51.118 Firmware Activate/Download: Not Supported 00:08:51.118 Namespace Management: Supported 00:08:51.118 Device Self-Test: Not Supported 00:08:51.118 Directives: Supported 00:08:51.118 NVMe-MI: Not Supported 00:08:51.118 Virtualization Management: Not Supported 00:08:51.118 Doorbell Buffer Config: Supported 00:08:51.118 Get LBA Status Capability: Not Supported 00:08:51.118 Command & Feature Lockdown Capability: Not Supported 00:08:51.118 Abort Command Limit: 4 00:08:51.118 Async Event Request Limit: 4 00:08:51.118 Number of Firmware Slots: N/A 00:08:51.118 Firmware Slot 1 Read-Only: N/A 00:08:51.118 Firmware Activation Without Reset: N/A 00:08:51.118 Multiple Update Detection Support: N/A 00:08:51.118 Firmware Update Granularity: No Information Provided 00:08:51.118 Per-Namespace SMART Log: Yes 00:08:51.118 Asymmetric Namespace Access Log Page: Not Supported 00:08:51.118 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:51.118 Command Effects Log Page: Supported 00:08:51.118 Get Log Page Extended Data: Supported 00:08:51.118 Telemetry Log Pages: Not Supported 00:08:51.118 Persistent Event Log Pages: Not Supported 00:08:51.118 Supported Log Pages Log Page: May Support 00:08:51.118 Commands Supported & Effects Log Page: Not Supported 00:08:51.118 Feature Identifiers & Effects Log Page:May Support 00:08:51.118 NVMe-MI Commands & Effects Log Page: May Support 00:08:51.118 Data Area 4 for Telemetry Log: Not Supported 00:08:51.118 Error Log Page Entries Supported: 1 00:08:51.118 Keep Alive: Not Supported 00:08:51.118 00:08:51.118 NVM Command Set Attributes 00:08:51.118 ========================== 00:08:51.118 Submission Queue Entry Size 00:08:51.118 Max: 64 00:08:51.118 Min: 64 00:08:51.118 Completion Queue Entry Size 00:08:51.118 Max: 16 00:08:51.118 Min: 16 00:08:51.118 Number of Namespaces: 256 00:08:51.118 Compare Command: Supported 00:08:51.119 Write Uncorrectable Command: Not Supported 00:08:51.119 Dataset Management Command: Supported 00:08:51.119 Write Zeroes Command: Supported 00:08:51.119 Set Features Save Field: Supported 00:08:51.119 Reservations: Not Supported 00:08:51.119 Timestamp: Supported 00:08:51.119 Copy: Supported 00:08:51.119 Volatile Write Cache: Present 00:08:51.119 Atomic Write Unit (Normal): 1 00:08:51.119 Atomic Write Unit (PFail): 1 00:08:51.119 Atomic Compare & Write Unit: 1 00:08:51.119 Fused Compare & Write: Not Supported 00:08:51.119 Scatter-Gather List 00:08:51.119 SGL Command Set: Supported 00:08:51.119 SGL Keyed: Not Supported 00:08:51.119 SGL Bit Bucket Descriptor: Not Supported 00:08:51.119 SGL Metadata Pointer: Not Supported 00:08:51.119 Oversized SGL: Not Supported 00:08:51.119 SGL Metadata Address: Not Supported 00:08:51.119 SGL Offset: Not Supported 00:08:51.119 Transport SGL Data Block: Not Supported 00:08:51.119 Replay Protected Memory Block: Not Supported 00:08:51.119 00:08:51.119 Firmware Slot Information 00:08:51.119 ========================= 00:08:51.119 Active slot: 1 00:08:51.119 Slot 1 Firmware Revision: 1.0 00:08:51.119 00:08:51.119 00:08:51.119 Commands Supported and Effects 00:08:51.119 ============================== 00:08:51.119 Admin Commands 00:08:51.119 -------------- 00:08:51.119 Delete I/O Submission Queue (00h): Supported 00:08:51.119 Create I/O Submission Queue (01h): Supported 00:08:51.119 
Get Log Page (02h): Supported 00:08:51.119 Delete I/O Completion Queue (04h): Supported 00:08:51.119 Create I/O Completion Queue (05h): Supported 00:08:51.119 Identify (06h): Supported 00:08:51.119 Abort (08h): Supported 00:08:51.119 Set Features (09h): Supported 00:08:51.119 Get Features (0Ah): Supported 00:08:51.119 Asynchronous Event Request (0Ch): Supported 00:08:51.119 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:51.119 Directive Send (19h): Supported 00:08:51.119 Directive Receive (1Ah): Supported 00:08:51.119 Virtualization Management (1Ch): Supported 00:08:51.119 Doorbell Buffer Config (7Ch): Supported 00:08:51.119 Format NVM (80h): Supported LBA-Change 00:08:51.119 I/O Commands 00:08:51.119 ------------ 00:08:51.119 Flush (00h): Supported LBA-Change 00:08:51.119 Write (01h): Supported LBA-Change 00:08:51.119 Read (02h): Supported 00:08:51.119 Compare (05h): Supported 00:08:51.119 Write Zeroes (08h): Supported LBA-Change 00:08:51.119 Dataset Management (09h): Supported LBA-Change 00:08:51.119 Unknown (0Ch): Supported 00:08:51.119 Unknown (12h): Supported 00:08:51.119 Copy (19h): Supported LBA-Change 00:08:51.119 Unknown (1Dh): Supported LBA-Change 00:08:51.119 00:08:51.119 Error Log 00:08:51.119 ========= 00:08:51.119 00:08:51.119 Arbitration 00:08:51.119 =========== 00:08:51.119 Arbitration Burst: no limit 00:08:51.119 00:08:51.119 Power Management 00:08:51.119 ================ 00:08:51.119 Number of Power States: 1 00:08:51.119 Current Power State: Power State #0 00:08:51.119 Power State #0: 00:08:51.119 Max Power: 25.00 W 00:08:51.119 Non-Operational State: Operational 00:08:51.119 Entry Latency: 16 microseconds 00:08:51.119 Exit Latency: 4 microseconds 00:08:51.119 Relative Read Throughput: 0 00:08:51.119 Relative Read Latency: 0 00:08:51.119 Relative Write Throughput: 0 00:08:51.119 Relative Write Latency: 0 00:08:51.378 Idle Power: Not Reported 00:08:51.378 Active Power: Not Reported 00:08:51.378 Non-Operational Permissive Mode: Not Supported 00:08:51.378 00:08:51.378 Health Information 00:08:51.378 ================== 00:08:51.378 Critical Warnings: 00:08:51.378 Available Spare Space: OK 00:08:51.378 Temperature: OK 00:08:51.378 Device Reliability: OK 00:08:51.378 Read Only: No 00:08:51.378 Volatile Memory Backup: OK 00:08:51.378 Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.378 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:51.378 Available Spare: 0% 00:08:51.378 Available Spare Threshold: 0% 00:08:51.378 Life Percentage Used: 0% 00:08:51.378 Data Units Read: 792 00:08:51.378 Data Units Written: 720 00:08:51.378 Host Read Commands: 37769 00:08:51.378 Host Write Commands: 37555 00:08:51.378 Controller Busy Time: 0 minutes 00:08:51.378 Power Cycles: 0 00:08:51.378 Power On Hours: 0 hours 00:08:51.378 Unsafe Shutdowns: 0 00:08:51.378 Unrecoverable Media Errors: 0 00:08:51.378 Lifetime Error Log Entries: 0 00:08:51.378 Warning Temperature Time: 0 minutes 00:08:51.378 Critical Temperature Time: 0 minutes 00:08:51.378 00:08:51.378 Number of Queues 00:08:51.378 ================ 00:08:51.378 Number of I/O Submission Queues: 64 00:08:51.378 Number of I/O Completion Queues: 64 00:08:51.378 00:08:51.378 ZNS Specific Controller Data 00:08:51.378 ============================ 00:08:51.378 Zone Append Size Limit: 0 00:08:51.378 00:08:51.378 00:08:51.378 Active Namespaces 00:08:51.378 ================= 00:08:51.378 Namespace ID:1 00:08:51.378 Error Recovery Timeout: Unlimited 00:08:51.378 Command Set Identifier: NVM (00h) 00:08:51.378 Deallocate: Supported 
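[Annotation] The "Health Information" block above (temperature in Kelvin, available spare, data units, host commands) is the controller's SMART / Health Information log page, log identifier 02h. A minimal C sketch for fetching it over the admin queue, assuming an initialized SPDK environment and an attached controller 'ctrlr'; the callback and the busy-wait loop are simplified for illustration.

    #include <stdio.h>
    #include <stdbool.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool g_done;

    static void
    health_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        const struct spdk_nvme_health_information_page *page = cb_arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
            /* 'temperature' is reported in Kelvin, as in the log above. */
            printf("Current Temperature: %u Kelvin\n", page->temperature);
            printf("Available Spare: %u%%\n", page->available_spare);
            printf("Life Percentage Used: %u%%\n", page->percentage_used);
        }
        g_done = true;
    }

    static int
    fetch_health_page(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* The device DMAs into the payload, so use pinned spdk_zmalloc() memory. */
        struct spdk_nvme_health_information_page *page =
            spdk_zmalloc(sizeof(*page), 0x1000, NULL,
                         SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        int rc;

        if (page == NULL) {
            return -1;
        }
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG, page,
                                              sizeof(*page), 0,
                                              health_log_done, page);
        while (rc == 0 && !g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        spdk_free(page);
        return rc;
    }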
00:08:51.378 Deallocated/Unwritten Error: Supported 00:08:51.378 Deallocated Read Value: All 0x00 00:08:51.378 Deallocate in Write Zeroes: Not Supported 00:08:51.378 Deallocated Guard Field: 0xFFFF 00:08:51.378 Flush: Supported 00:08:51.378 Reservation: Not Supported 00:08:51.378 Metadata Transferred as: Separate Metadata Buffer 00:08:51.378 Namespace Sharing Capabilities: Private 00:08:51.378 Size (in LBAs): 1548666 (5GiB) 00:08:51.378 Capacity (in LBAs): 1548666 (5GiB) 00:08:51.378 Utilization (in LBAs): 1548666 (5GiB) 00:08:51.378 Thin Provisioning: Not Supported 00:08:51.378 Per-NS Atomic Units: No 00:08:51.378 Maximum Single Source Range Length: 128 00:08:51.378 Maximum Copy Length: 128 00:08:51.378 Maximum Source Range Count: 128 00:08:51.378 NGUID/EUI64 Never Reused: No 00:08:51.378 Namespace Write Protected: No 00:08:51.378 Number of LBA Formats: 8 00:08:51.378 Current LBA Format: LBA Format #07 00:08:51.378 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.378 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:51.378 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.378 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.378 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.378 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.378 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.378 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.378 00:08:51.378 NVM Specific Namespace Data 00:08:51.378 =========================== 00:08:51.378 Logical Block Storage Tag Mask: 0 00:08:51.378 Protection Information Capabilities: 00:08:51.378 16b Guard Protection Information Storage Tag Support: No 00:08:51.378 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.378 Storage Tag Check Read Support: No 00:08:51.378 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.378 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:51.378 15:13:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:51.639 ===================================================== 00:08:51.639 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.639 ===================================================== 00:08:51.639 Controller Capabilities/Features 00:08:51.639 ================================ 00:08:51.639 Vendor ID: 1b36 00:08:51.639 Subsystem Vendor ID: 1af4 00:08:51.639 Serial Number: 12341 00:08:51.639 Model Number: QEMU NVMe Ctrl 00:08:51.639 Firmware Version: 8.0.0 00:08:51.639 Recommended Arb Burst: 6 00:08:51.639 IEEE OUI Identifier: 00 54 52 00:08:51.639 Multi-path I/O 00:08:51.639 May have multiple subsystem ports: No 00:08:51.639 May have multiple 
controllers: No 00:08:51.639 Associated with SR-IOV VF: No 00:08:51.639 Max Data Transfer Size: 524288 00:08:51.639 Max Number of Namespaces: 256 00:08:51.639 Max Number of I/O Queues: 64 00:08:51.639 NVMe Specification Version (VS): 1.4 00:08:51.639 NVMe Specification Version (Identify): 1.4 00:08:51.639 Maximum Queue Entries: 2048 00:08:51.639 Contiguous Queues Required: Yes 00:08:51.639 Arbitration Mechanisms Supported 00:08:51.639 Weighted Round Robin: Not Supported 00:08:51.639 Vendor Specific: Not Supported 00:08:51.639 Reset Timeout: 7500 ms 00:08:51.639 Doorbell Stride: 4 bytes 00:08:51.639 NVM Subsystem Reset: Not Supported 00:08:51.639 Command Sets Supported 00:08:51.639 NVM Command Set: Supported 00:08:51.639 Boot Partition: Not Supported 00:08:51.639 Memory Page Size Minimum: 4096 bytes 00:08:51.639 Memory Page Size Maximum: 65536 bytes 00:08:51.639 Persistent Memory Region: Not Supported 00:08:51.639 Optional Asynchronous Events Supported 00:08:51.639 Namespace Attribute Notices: Supported 00:08:51.639 Firmware Activation Notices: Not Supported 00:08:51.639 ANA Change Notices: Not Supported 00:08:51.639 PLE Aggregate Log Change Notices: Not Supported 00:08:51.639 LBA Status Info Alert Notices: Not Supported 00:08:51.639 EGE Aggregate Log Change Notices: Not Supported 00:08:51.639 Normal NVM Subsystem Shutdown event: Not Supported 00:08:51.639 Zone Descriptor Change Notices: Not Supported 00:08:51.639 Discovery Log Change Notices: Not Supported 00:08:51.639 Controller Attributes 00:08:51.639 128-bit Host Identifier: Not Supported 00:08:51.639 Non-Operational Permissive Mode: Not Supported 00:08:51.639 NVM Sets: Not Supported 00:08:51.639 Read Recovery Levels: Not Supported 00:08:51.639 Endurance Groups: Not Supported 00:08:51.639 Predictable Latency Mode: Not Supported 00:08:51.639 Traffic Based Keep ALive: Not Supported 00:08:51.639 Namespace Granularity: Not Supported 00:08:51.639 SQ Associations: Not Supported 00:08:51.639 UUID List: Not Supported 00:08:51.639 Multi-Domain Subsystem: Not Supported 00:08:51.639 Fixed Capacity Management: Not Supported 00:08:51.639 Variable Capacity Management: Not Supported 00:08:51.639 Delete Endurance Group: Not Supported 00:08:51.639 Delete NVM Set: Not Supported 00:08:51.639 Extended LBA Formats Supported: Supported 00:08:51.639 Flexible Data Placement Supported: Not Supported 00:08:51.639 00:08:51.639 Controller Memory Buffer Support 00:08:51.639 ================================ 00:08:51.639 Supported: No 00:08:51.639 00:08:51.639 Persistent Memory Region Support 00:08:51.639 ================================ 00:08:51.639 Supported: No 00:08:51.639 00:08:51.639 Admin Command Set Attributes 00:08:51.639 ============================ 00:08:51.639 Security Send/Receive: Not Supported 00:08:51.639 Format NVM: Supported 00:08:51.639 Firmware Activate/Download: Not Supported 00:08:51.639 Namespace Management: Supported 00:08:51.639 Device Self-Test: Not Supported 00:08:51.639 Directives: Supported 00:08:51.639 NVMe-MI: Not Supported 00:08:51.639 Virtualization Management: Not Supported 00:08:51.639 Doorbell Buffer Config: Supported 00:08:51.639 Get LBA Status Capability: Not Supported 00:08:51.639 Command & Feature Lockdown Capability: Not Supported 00:08:51.639 Abort Command Limit: 4 00:08:51.639 Async Event Request Limit: 4 00:08:51.639 Number of Firmware Slots: N/A 00:08:51.639 Firmware Slot 1 Read-Only: N/A 00:08:51.639 Firmware Activation Without Reset: N/A 00:08:51.639 Multiple Update Detection Support: N/A 00:08:51.639 Firmware Update 
Granularity: No Information Provided 00:08:51.639 Per-Namespace SMART Log: Yes 00:08:51.639 Asymmetric Namespace Access Log Page: Not Supported 00:08:51.639 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:51.639 Command Effects Log Page: Supported 00:08:51.639 Get Log Page Extended Data: Supported 00:08:51.639 Telemetry Log Pages: Not Supported 00:08:51.639 Persistent Event Log Pages: Not Supported 00:08:51.639 Supported Log Pages Log Page: May Support 00:08:51.639 Commands Supported & Effects Log Page: Not Supported 00:08:51.639 Feature Identifiers & Effects Log Page:May Support 00:08:51.639 NVMe-MI Commands & Effects Log Page: May Support 00:08:51.639 Data Area 4 for Telemetry Log: Not Supported 00:08:51.639 Error Log Page Entries Supported: 1 00:08:51.639 Keep Alive: Not Supported 00:08:51.639 00:08:51.639 NVM Command Set Attributes 00:08:51.639 ========================== 00:08:51.639 Submission Queue Entry Size 00:08:51.639 Max: 64 00:08:51.639 Min: 64 00:08:51.640 Completion Queue Entry Size 00:08:51.640 Max: 16 00:08:51.640 Min: 16 00:08:51.640 Number of Namespaces: 256 00:08:51.640 Compare Command: Supported 00:08:51.640 Write Uncorrectable Command: Not Supported 00:08:51.640 Dataset Management Command: Supported 00:08:51.640 Write Zeroes Command: Supported 00:08:51.640 Set Features Save Field: Supported 00:08:51.640 Reservations: Not Supported 00:08:51.640 Timestamp: Supported 00:08:51.640 Copy: Supported 00:08:51.640 Volatile Write Cache: Present 00:08:51.640 Atomic Write Unit (Normal): 1 00:08:51.640 Atomic Write Unit (PFail): 1 00:08:51.640 Atomic Compare & Write Unit: 1 00:08:51.640 Fused Compare & Write: Not Supported 00:08:51.640 Scatter-Gather List 00:08:51.640 SGL Command Set: Supported 00:08:51.640 SGL Keyed: Not Supported 00:08:51.640 SGL Bit Bucket Descriptor: Not Supported 00:08:51.640 SGL Metadata Pointer: Not Supported 00:08:51.640 Oversized SGL: Not Supported 00:08:51.640 SGL Metadata Address: Not Supported 00:08:51.640 SGL Offset: Not Supported 00:08:51.640 Transport SGL Data Block: Not Supported 00:08:51.640 Replay Protected Memory Block: Not Supported 00:08:51.640 00:08:51.640 Firmware Slot Information 00:08:51.640 ========================= 00:08:51.640 Active slot: 1 00:08:51.640 Slot 1 Firmware Revision: 1.0 00:08:51.640 00:08:51.640 00:08:51.640 Commands Supported and Effects 00:08:51.640 ============================== 00:08:51.640 Admin Commands 00:08:51.640 -------------- 00:08:51.640 Delete I/O Submission Queue (00h): Supported 00:08:51.640 Create I/O Submission Queue (01h): Supported 00:08:51.640 Get Log Page (02h): Supported 00:08:51.640 Delete I/O Completion Queue (04h): Supported 00:08:51.640 Create I/O Completion Queue (05h): Supported 00:08:51.640 Identify (06h): Supported 00:08:51.640 Abort (08h): Supported 00:08:51.640 Set Features (09h): Supported 00:08:51.640 Get Features (0Ah): Supported 00:08:51.640 Asynchronous Event Request (0Ch): Supported 00:08:51.640 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:51.640 Directive Send (19h): Supported 00:08:51.640 Directive Receive (1Ah): Supported 00:08:51.640 Virtualization Management (1Ch): Supported 00:08:51.640 Doorbell Buffer Config (7Ch): Supported 00:08:51.640 Format NVM (80h): Supported LBA-Change 00:08:51.640 I/O Commands 00:08:51.640 ------------ 00:08:51.640 Flush (00h): Supported LBA-Change 00:08:51.640 Write (01h): Supported LBA-Change 00:08:51.640 Read (02h): Supported 00:08:51.640 Compare (05h): Supported 00:08:51.640 Write Zeroes (08h): Supported LBA-Change 00:08:51.640 
Dataset Management (09h): Supported LBA-Change 00:08:51.640 Unknown (0Ch): Supported 00:08:51.640 Unknown (12h): Supported 00:08:51.640 Copy (19h): Supported LBA-Change 00:08:51.640 Unknown (1Dh): Supported LBA-Change 00:08:51.640 00:08:51.640 Error Log 00:08:51.640 ========= 00:08:51.640 00:08:51.640 Arbitration 00:08:51.640 =========== 00:08:51.640 Arbitration Burst: no limit 00:08:51.640 00:08:51.640 Power Management 00:08:51.640 ================ 00:08:51.640 Number of Power States: 1 00:08:51.640 Current Power State: Power State #0 00:08:51.640 Power State #0: 00:08:51.640 Max Power: 25.00 W 00:08:51.640 Non-Operational State: Operational 00:08:51.640 Entry Latency: 16 microseconds 00:08:51.640 Exit Latency: 4 microseconds 00:08:51.640 Relative Read Throughput: 0 00:08:51.640 Relative Read Latency: 0 00:08:51.640 Relative Write Throughput: 0 00:08:51.640 Relative Write Latency: 0 00:08:51.640 Idle Power: Not Reported 00:08:51.640 Active Power: Not Reported 00:08:51.640 Non-Operational Permissive Mode: Not Supported 00:08:51.640 00:08:51.640 Health Information 00:08:51.640 ================== 00:08:51.640 Critical Warnings: 00:08:51.640 Available Spare Space: OK 00:08:51.640 Temperature: OK 00:08:51.640 Device Reliability: OK 00:08:51.640 Read Only: No 00:08:51.640 Volatile Memory Backup: OK 00:08:51.640 Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.640 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:51.640 Available Spare: 0% 00:08:51.640 Available Spare Threshold: 0% 00:08:51.640 Life Percentage Used: 0% 00:08:51.640 Data Units Read: 1198 00:08:51.640 Data Units Written: 1058 00:08:51.640 Host Read Commands: 56022 00:08:51.640 Host Write Commands: 54718 00:08:51.640 Controller Busy Time: 0 minutes 00:08:51.640 Power Cycles: 0 00:08:51.640 Power On Hours: 0 hours 00:08:51.640 Unsafe Shutdowns: 0 00:08:51.640 Unrecoverable Media Errors: 0 00:08:51.640 Lifetime Error Log Entries: 0 00:08:51.640 Warning Temperature Time: 0 minutes 00:08:51.640 Critical Temperature Time: 0 minutes 00:08:51.640 00:08:51.640 Number of Queues 00:08:51.640 ================ 00:08:51.640 Number of I/O Submission Queues: 64 00:08:51.640 Number of I/O Completion Queues: 64 00:08:51.640 00:08:51.640 ZNS Specific Controller Data 00:08:51.640 ============================ 00:08:51.640 Zone Append Size Limit: 0 00:08:51.640 00:08:51.640 00:08:51.640 Active Namespaces 00:08:51.640 ================= 00:08:51.640 Namespace ID:1 00:08:51.640 Error Recovery Timeout: Unlimited 00:08:51.640 Command Set Identifier: NVM (00h) 00:08:51.640 Deallocate: Supported 00:08:51.640 Deallocated/Unwritten Error: Supported 00:08:51.640 Deallocated Read Value: All 0x00 00:08:51.640 Deallocate in Write Zeroes: Not Supported 00:08:51.640 Deallocated Guard Field: 0xFFFF 00:08:51.640 Flush: Supported 00:08:51.640 Reservation: Not Supported 00:08:51.640 Namespace Sharing Capabilities: Private 00:08:51.640 Size (in LBAs): 1310720 (5GiB) 00:08:51.640 Capacity (in LBAs): 1310720 (5GiB) 00:08:51.640 Utilization (in LBAs): 1310720 (5GiB) 00:08:51.640 Thin Provisioning: Not Supported 00:08:51.640 Per-NS Atomic Units: No 00:08:51.640 Maximum Single Source Range Length: 128 00:08:51.640 Maximum Copy Length: 128 00:08:51.640 Maximum Source Range Count: 128 00:08:51.640 NGUID/EUI64 Never Reused: No 00:08:51.640 Namespace Write Protected: No 00:08:51.640 Number of LBA Formats: 8 00:08:51.640 Current LBA Format: LBA Format #04 00:08:51.640 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.640 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:51.640 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.640 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.640 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.640 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.640 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.640 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.640 00:08:51.640 NVM Specific Namespace Data 00:08:51.640 =========================== 00:08:51.640 Logical Block Storage Tag Mask: 0 00:08:51.640 Protection Information Capabilities: 00:08:51.640 16b Guard Protection Information Storage Tag Support: No 00:08:51.640 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.640 Storage Tag Check Read Support: No 00:08:51.640 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.640 15:13:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:51.640 15:13:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:51.901 ===================================================== 00:08:51.901 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.901 ===================================================== 00:08:51.901 Controller Capabilities/Features 00:08:51.901 ================================ 00:08:51.901 Vendor ID: 1b36 00:08:51.901 Subsystem Vendor ID: 1af4 00:08:51.901 Serial Number: 12342 00:08:51.901 Model Number: QEMU NVMe Ctrl 00:08:51.901 Firmware Version: 8.0.0 00:08:51.901 Recommended Arb Burst: 6 00:08:51.901 IEEE OUI Identifier: 00 54 52 00:08:51.901 Multi-path I/O 00:08:51.901 May have multiple subsystem ports: No 00:08:51.901 May have multiple controllers: No 00:08:51.901 Associated with SR-IOV VF: No 00:08:51.901 Max Data Transfer Size: 524288 00:08:51.901 Max Number of Namespaces: 256 00:08:51.901 Max Number of I/O Queues: 64 00:08:51.901 NVMe Specification Version (VS): 1.4 00:08:51.901 NVMe Specification Version (Identify): 1.4 00:08:51.901 Maximum Queue Entries: 2048 00:08:51.901 Contiguous Queues Required: Yes 00:08:51.901 Arbitration Mechanisms Supported 00:08:51.901 Weighted Round Robin: Not Supported 00:08:51.901 Vendor Specific: Not Supported 00:08:51.901 Reset Timeout: 7500 ms 00:08:51.901 Doorbell Stride: 4 bytes 00:08:51.901 NVM Subsystem Reset: Not Supported 00:08:51.901 Command Sets Supported 00:08:51.901 NVM Command Set: Supported 00:08:51.901 Boot Partition: Not Supported 00:08:51.901 Memory Page Size Minimum: 4096 bytes 00:08:51.901 Memory Page Size Maximum: 65536 bytes 00:08:51.901 Persistent Memory Region: Not Supported 00:08:51.901 Optional Asynchronous Events Supported 00:08:51.901 Namespace Attribute Notices: Supported 00:08:51.901 
Firmware Activation Notices: Not Supported 00:08:51.901 ANA Change Notices: Not Supported 00:08:51.901 PLE Aggregate Log Change Notices: Not Supported 00:08:51.901 LBA Status Info Alert Notices: Not Supported 00:08:51.901 EGE Aggregate Log Change Notices: Not Supported 00:08:51.901 Normal NVM Subsystem Shutdown event: Not Supported 00:08:51.901 Zone Descriptor Change Notices: Not Supported 00:08:51.901 Discovery Log Change Notices: Not Supported 00:08:51.901 Controller Attributes 00:08:51.901 128-bit Host Identifier: Not Supported 00:08:51.901 Non-Operational Permissive Mode: Not Supported 00:08:51.901 NVM Sets: Not Supported 00:08:51.901 Read Recovery Levels: Not Supported 00:08:51.901 Endurance Groups: Not Supported 00:08:51.901 Predictable Latency Mode: Not Supported 00:08:51.901 Traffic Based Keep ALive: Not Supported 00:08:51.901 Namespace Granularity: Not Supported 00:08:51.901 SQ Associations: Not Supported 00:08:51.901 UUID List: Not Supported 00:08:51.901 Multi-Domain Subsystem: Not Supported 00:08:51.901 Fixed Capacity Management: Not Supported 00:08:51.901 Variable Capacity Management: Not Supported 00:08:51.901 Delete Endurance Group: Not Supported 00:08:51.901 Delete NVM Set: Not Supported 00:08:51.901 Extended LBA Formats Supported: Supported 00:08:51.901 Flexible Data Placement Supported: Not Supported 00:08:51.901 00:08:51.901 Controller Memory Buffer Support 00:08:51.901 ================================ 00:08:51.901 Supported: No 00:08:51.901 00:08:51.901 Persistent Memory Region Support 00:08:51.901 ================================ 00:08:51.901 Supported: No 00:08:51.901 00:08:51.901 Admin Command Set Attributes 00:08:51.901 ============================ 00:08:51.901 Security Send/Receive: Not Supported 00:08:51.901 Format NVM: Supported 00:08:51.901 Firmware Activate/Download: Not Supported 00:08:51.901 Namespace Management: Supported 00:08:51.901 Device Self-Test: Not Supported 00:08:51.901 Directives: Supported 00:08:51.901 NVMe-MI: Not Supported 00:08:51.901 Virtualization Management: Not Supported 00:08:51.901 Doorbell Buffer Config: Supported 00:08:51.901 Get LBA Status Capability: Not Supported 00:08:51.902 Command & Feature Lockdown Capability: Not Supported 00:08:51.902 Abort Command Limit: 4 00:08:51.902 Async Event Request Limit: 4 00:08:51.902 Number of Firmware Slots: N/A 00:08:51.902 Firmware Slot 1 Read-Only: N/A 00:08:51.902 Firmware Activation Without Reset: N/A 00:08:51.902 Multiple Update Detection Support: N/A 00:08:51.902 Firmware Update Granularity: No Information Provided 00:08:51.902 Per-Namespace SMART Log: Yes 00:08:51.902 Asymmetric Namespace Access Log Page: Not Supported 00:08:51.902 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:51.902 Command Effects Log Page: Supported 00:08:51.902 Get Log Page Extended Data: Supported 00:08:51.902 Telemetry Log Pages: Not Supported 00:08:51.902 Persistent Event Log Pages: Not Supported 00:08:51.902 Supported Log Pages Log Page: May Support 00:08:51.902 Commands Supported & Effects Log Page: Not Supported 00:08:51.902 Feature Identifiers & Effects Log Page:May Support 00:08:51.902 NVMe-MI Commands & Effects Log Page: May Support 00:08:51.902 Data Area 4 for Telemetry Log: Not Supported 00:08:51.902 Error Log Page Entries Supported: 1 00:08:51.902 Keep Alive: Not Supported 00:08:51.902 00:08:51.902 NVM Command Set Attributes 00:08:51.902 ========================== 00:08:51.902 Submission Queue Entry Size 00:08:51.902 Max: 64 00:08:51.902 Min: 64 00:08:51.902 Completion Queue Entry Size 00:08:51.902 Max: 16 
00:08:51.902 Min: 16 00:08:51.902 Number of Namespaces: 256 00:08:51.902 Compare Command: Supported 00:08:51.902 Write Uncorrectable Command: Not Supported 00:08:51.902 Dataset Management Command: Supported 00:08:51.902 Write Zeroes Command: Supported 00:08:51.902 Set Features Save Field: Supported 00:08:51.902 Reservations: Not Supported 00:08:51.902 Timestamp: Supported 00:08:51.902 Copy: Supported 00:08:51.902 Volatile Write Cache: Present 00:08:51.902 Atomic Write Unit (Normal): 1 00:08:51.902 Atomic Write Unit (PFail): 1 00:08:51.902 Atomic Compare & Write Unit: 1 00:08:51.902 Fused Compare & Write: Not Supported 00:08:51.902 Scatter-Gather List 00:08:51.902 SGL Command Set: Supported 00:08:51.902 SGL Keyed: Not Supported 00:08:51.902 SGL Bit Bucket Descriptor: Not Supported 00:08:51.902 SGL Metadata Pointer: Not Supported 00:08:51.902 Oversized SGL: Not Supported 00:08:51.902 SGL Metadata Address: Not Supported 00:08:51.902 SGL Offset: Not Supported 00:08:51.902 Transport SGL Data Block: Not Supported 00:08:51.902 Replay Protected Memory Block: Not Supported 00:08:51.902 00:08:51.902 Firmware Slot Information 00:08:51.902 ========================= 00:08:51.902 Active slot: 1 00:08:51.902 Slot 1 Firmware Revision: 1.0 00:08:51.902 00:08:51.902 00:08:51.902 Commands Supported and Effects 00:08:51.902 ============================== 00:08:51.902 Admin Commands 00:08:51.902 -------------- 00:08:51.902 Delete I/O Submission Queue (00h): Supported 00:08:51.902 Create I/O Submission Queue (01h): Supported 00:08:51.902 Get Log Page (02h): Supported 00:08:51.902 Delete I/O Completion Queue (04h): Supported 00:08:51.902 Create I/O Completion Queue (05h): Supported 00:08:51.902 Identify (06h): Supported 00:08:51.902 Abort (08h): Supported 00:08:51.902 Set Features (09h): Supported 00:08:51.902 Get Features (0Ah): Supported 00:08:51.902 Asynchronous Event Request (0Ch): Supported 00:08:51.902 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:51.902 Directive Send (19h): Supported 00:08:51.902 Directive Receive (1Ah): Supported 00:08:51.902 Virtualization Management (1Ch): Supported 00:08:51.902 Doorbell Buffer Config (7Ch): Supported 00:08:51.902 Format NVM (80h): Supported LBA-Change 00:08:51.902 I/O Commands 00:08:51.902 ------------ 00:08:51.902 Flush (00h): Supported LBA-Change 00:08:51.902 Write (01h): Supported LBA-Change 00:08:51.902 Read (02h): Supported 00:08:51.902 Compare (05h): Supported 00:08:51.902 Write Zeroes (08h): Supported LBA-Change 00:08:51.902 Dataset Management (09h): Supported LBA-Change 00:08:51.902 Unknown (0Ch): Supported 00:08:51.902 Unknown (12h): Supported 00:08:51.902 Copy (19h): Supported LBA-Change 00:08:51.902 Unknown (1Dh): Supported LBA-Change 00:08:51.902 00:08:51.902 Error Log 00:08:51.902 ========= 00:08:51.902 00:08:51.902 Arbitration 00:08:51.902 =========== 00:08:51.902 Arbitration Burst: no limit 00:08:51.902 00:08:51.902 Power Management 00:08:51.902 ================ 00:08:51.902 Number of Power States: 1 00:08:51.902 Current Power State: Power State #0 00:08:51.902 Power State #0: 00:08:51.902 Max Power: 25.00 W 00:08:51.902 Non-Operational State: Operational 00:08:51.902 Entry Latency: 16 microseconds 00:08:51.902 Exit Latency: 4 microseconds 00:08:51.902 Relative Read Throughput: 0 00:08:51.902 Relative Read Latency: 0 00:08:51.902 Relative Write Throughput: 0 00:08:51.902 Relative Write Latency: 0 00:08:51.902 Idle Power: Not Reported 00:08:51.902 Active Power: Not Reported 00:08:51.902 Non-Operational Permissive Mode: Not Supported 
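[Annotation] Each dump in this log is produced by one spdk_nvme_identify invocation per PCIe address (the traced nvme.sh loop, for bdf in "${bdfs[@]}"). The equivalent first step in application code is parsing the same transport-ID string and connecting; a minimal C sketch, assuming spdk_env_init() has already run, with the address string passed in for illustration.

    #include <stdio.h>
    #include "spdk/nvme.h"

    static int
    identify_one(const char *trid_str) /* e.g. "trtype:PCIe traddr:0000:00:12.0" */
    {
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
            return -1;
        }
        /* Attaches to the controller at that BDF, as 'spdk_nvme_identify -r' does. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return -1;
        }
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        /* sn/mn/fr are fixed-width, space-padded fields, not NUL-terminated. */
        printf("Vendor ID: %04x\n", cdata->vid);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);
        printf("Firmware Version: %.8s\n", (const char *)cdata->fr);
        spdk_nvme_detach(ctrlr);
        return 0;
    }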
00:08:51.902 00:08:51.902 Health Information 00:08:51.902 ================== 00:08:51.902 Critical Warnings: 00:08:51.902 Available Spare Space: OK 00:08:51.902 Temperature: OK 00:08:51.902 Device Reliability: OK 00:08:51.902 Read Only: No 00:08:51.902 Volatile Memory Backup: OK 00:08:51.902 Current Temperature: 323 Kelvin (50 Celsius) 00:08:51.902 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:51.902 Available Spare: 0% 00:08:51.902 Available Spare Threshold: 0% 00:08:51.902 Life Percentage Used: 0% 00:08:51.902 Data Units Read: 2517 00:08:51.902 Data Units Written: 2304 00:08:51.902 Host Read Commands: 115124 00:08:51.902 Host Write Commands: 113393 00:08:51.902 Controller Busy Time: 0 minutes 00:08:51.902 Power Cycles: 0 00:08:51.902 Power On Hours: 0 hours 00:08:51.902 Unsafe Shutdowns: 0 00:08:51.902 Unrecoverable Media Errors: 0 00:08:51.902 Lifetime Error Log Entries: 0 00:08:51.902 Warning Temperature Time: 0 minutes 00:08:51.902 Critical Temperature Time: 0 minutes 00:08:51.902 00:08:51.902 Number of Queues 00:08:51.902 ================ 00:08:51.902 Number of I/O Submission Queues: 64 00:08:51.902 Number of I/O Completion Queues: 64 00:08:51.902 00:08:51.902 ZNS Specific Controller Data 00:08:51.902 ============================ 00:08:51.902 Zone Append Size Limit: 0 00:08:51.902 00:08:51.902 00:08:51.902 Active Namespaces 00:08:51.902 ================= 00:08:51.902 Namespace ID:1 00:08:51.902 Error Recovery Timeout: Unlimited 00:08:51.902 Command Set Identifier: NVM (00h) 00:08:51.902 Deallocate: Supported 00:08:51.902 Deallocated/Unwritten Error: Supported 00:08:51.902 Deallocated Read Value: All 0x00 00:08:51.902 Deallocate in Write Zeroes: Not Supported 00:08:51.902 Deallocated Guard Field: 0xFFFF 00:08:51.902 Flush: Supported 00:08:51.903 Reservation: Not Supported 00:08:51.903 Namespace Sharing Capabilities: Private 00:08:51.903 Size (in LBAs): 1048576 (4GiB) 00:08:51.903 Capacity (in LBAs): 1048576 (4GiB) 00:08:51.903 Utilization (in LBAs): 1048576 (4GiB) 00:08:51.903 Thin Provisioning: Not Supported 00:08:51.903 Per-NS Atomic Units: No 00:08:51.903 Maximum Single Source Range Length: 128 00:08:51.903 Maximum Copy Length: 128 00:08:51.903 Maximum Source Range Count: 128 00:08:51.903 NGUID/EUI64 Never Reused: No 00:08:51.903 Namespace Write Protected: No 00:08:51.903 Number of LBA Formats: 8 00:08:51.903 Current LBA Format: LBA Format #04 00:08:51.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.903 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:51.903 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.903 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.903 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.903 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.903 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.903 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.903 00:08:51.903 NVM Specific Namespace Data 00:08:51.903 =========================== 00:08:51.903 Logical Block Storage Tag Mask: 0 00:08:51.903 Protection Information Capabilities: 00:08:51.903 16b Guard Protection Information Storage Tag Support: No 00:08:51.903 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.903 Storage Tag Check Read Support: No 00:08:51.903 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Namespace ID:2 00:08:51.903 Error Recovery Timeout: Unlimited 00:08:51.903 Command Set Identifier: NVM (00h) 00:08:51.903 Deallocate: Supported 00:08:51.903 Deallocated/Unwritten Error: Supported 00:08:51.903 Deallocated Read Value: All 0x00 00:08:51.903 Deallocate in Write Zeroes: Not Supported 00:08:51.903 Deallocated Guard Field: 0xFFFF 00:08:51.903 Flush: Supported 00:08:51.903 Reservation: Not Supported 00:08:51.903 Namespace Sharing Capabilities: Private 00:08:51.903 Size (in LBAs): 1048576 (4GiB) 00:08:51.903 Capacity (in LBAs): 1048576 (4GiB) 00:08:51.903 Utilization (in LBAs): 1048576 (4GiB) 00:08:51.903 Thin Provisioning: Not Supported 00:08:51.903 Per-NS Atomic Units: No 00:08:51.903 Maximum Single Source Range Length: 128 00:08:51.903 Maximum Copy Length: 128 00:08:51.903 Maximum Source Range Count: 128 00:08:51.903 NGUID/EUI64 Never Reused: No 00:08:51.903 Namespace Write Protected: No 00:08:51.903 Number of LBA Formats: 8 00:08:51.903 Current LBA Format: LBA Format #04 00:08:51.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.903 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:51.903 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.903 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.903 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.903 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.903 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.903 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.903 00:08:51.903 NVM Specific Namespace Data 00:08:51.903 =========================== 00:08:51.903 Logical Block Storage Tag Mask: 0 00:08:51.903 Protection Information Capabilities: 00:08:51.903 16b Guard Protection Information Storage Tag Support: No 00:08:51.903 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.903 Storage Tag Check Read Support: No 00:08:51.903 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Namespace ID:3 00:08:51.903 Error Recovery Timeout: Unlimited 00:08:51.903 Command Set Identifier: NVM (00h) 00:08:51.903 Deallocate: Supported 00:08:51.903 Deallocated/Unwritten Error: Supported 00:08:51.903 Deallocated Read 
Value: All 0x00 00:08:51.903 Deallocate in Write Zeroes: Not Supported 00:08:51.903 Deallocated Guard Field: 0xFFFF 00:08:51.903 Flush: Supported 00:08:51.903 Reservation: Not Supported 00:08:51.903 Namespace Sharing Capabilities: Private 00:08:51.903 Size (in LBAs): 1048576 (4GiB) 00:08:51.903 Capacity (in LBAs): 1048576 (4GiB) 00:08:51.903 Utilization (in LBAs): 1048576 (4GiB) 00:08:51.903 Thin Provisioning: Not Supported 00:08:51.903 Per-NS Atomic Units: No 00:08:51.903 Maximum Single Source Range Length: 128 00:08:51.903 Maximum Copy Length: 128 00:08:51.903 Maximum Source Range Count: 128 00:08:51.903 NGUID/EUI64 Never Reused: No 00:08:51.903 Namespace Write Protected: No 00:08:51.903 Number of LBA Formats: 8 00:08:51.903 Current LBA Format: LBA Format #04 00:08:51.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:51.903 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:51.903 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:51.903 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:51.903 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:51.903 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:51.903 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:51.903 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:51.903 00:08:51.903 NVM Specific Namespace Data 00:08:51.903 =========================== 00:08:51.903 Logical Block Storage Tag Mask: 0 00:08:51.903 Protection Information Capabilities: 00:08:51.903 16b Guard Protection Information Storage Tag Support: No 00:08:51.903 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:51.903 Storage Tag Check Read Support: No 00:08:51.903 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:51.903 15:13:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:51.903 15:13:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:52.164 ===================================================== 00:08:52.164 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.164 ===================================================== 00:08:52.164 Controller Capabilities/Features 00:08:52.164 ================================ 00:08:52.164 Vendor ID: 1b36 00:08:52.164 Subsystem Vendor ID: 1af4 00:08:52.164 Serial Number: 12343 00:08:52.164 Model Number: QEMU NVMe Ctrl 00:08:52.164 Firmware Version: 8.0.0 00:08:52.164 Recommended Arb Burst: 6 00:08:52.164 IEEE OUI Identifier: 00 54 52 00:08:52.164 Multi-path I/O 00:08:52.164 May have multiple subsystem ports: No 00:08:52.164 May have multiple controllers: Yes 00:08:52.164 Associated with SR-IOV VF: No 00:08:52.164 Max Data Transfer Size: 524288 00:08:52.164 Max Number of Namespaces: 
256 00:08:52.164 Max Number of I/O Queues: 64 00:08:52.164 NVMe Specification Version (VS): 1.4 00:08:52.164 NVMe Specification Version (Identify): 1.4 00:08:52.164 Maximum Queue Entries: 2048 00:08:52.164 Contiguous Queues Required: Yes 00:08:52.164 Arbitration Mechanisms Supported 00:08:52.164 Weighted Round Robin: Not Supported 00:08:52.164 Vendor Specific: Not Supported 00:08:52.164 Reset Timeout: 7500 ms 00:08:52.164 Doorbell Stride: 4 bytes 00:08:52.164 NVM Subsystem Reset: Not Supported 00:08:52.164 Command Sets Supported 00:08:52.164 NVM Command Set: Supported 00:08:52.164 Boot Partition: Not Supported 00:08:52.164 Memory Page Size Minimum: 4096 bytes 00:08:52.164 Memory Page Size Maximum: 65536 bytes 00:08:52.164 Persistent Memory Region: Not Supported 00:08:52.164 Optional Asynchronous Events Supported 00:08:52.164 Namespace Attribute Notices: Supported 00:08:52.164 Firmware Activation Notices: Not Supported 00:08:52.164 ANA Change Notices: Not Supported 00:08:52.164 PLE Aggregate Log Change Notices: Not Supported 00:08:52.164 LBA Status Info Alert Notices: Not Supported 00:08:52.164 EGE Aggregate Log Change Notices: Not Supported 00:08:52.164 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.164 Zone Descriptor Change Notices: Not Supported 00:08:52.164 Discovery Log Change Notices: Not Supported 00:08:52.164 Controller Attributes 00:08:52.164 128-bit Host Identifier: Not Supported 00:08:52.164 Non-Operational Permissive Mode: Not Supported 00:08:52.164 NVM Sets: Not Supported 00:08:52.164 Read Recovery Levels: Not Supported 00:08:52.164 Endurance Groups: Supported 00:08:52.164 Predictable Latency Mode: Not Supported 00:08:52.164 Traffic Based Keep ALive: Not Supported 00:08:52.164 Namespace Granularity: Not Supported 00:08:52.164 SQ Associations: Not Supported 00:08:52.164 UUID List: Not Supported 00:08:52.164 Multi-Domain Subsystem: Not Supported 00:08:52.164 Fixed Capacity Management: Not Supported 00:08:52.164 Variable Capacity Management: Not Supported 00:08:52.164 Delete Endurance Group: Not Supported 00:08:52.164 Delete NVM Set: Not Supported 00:08:52.164 Extended LBA Formats Supported: Supported 00:08:52.164 Flexible Data Placement Supported: Supported 00:08:52.164 00:08:52.164 Controller Memory Buffer Support 00:08:52.164 ================================ 00:08:52.164 Supported: No 00:08:52.164 00:08:52.164 Persistent Memory Region Support 00:08:52.164 ================================ 00:08:52.164 Supported: No 00:08:52.164 00:08:52.164 Admin Command Set Attributes 00:08:52.164 ============================ 00:08:52.164 Security Send/Receive: Not Supported 00:08:52.164 Format NVM: Supported 00:08:52.164 Firmware Activate/Download: Not Supported 00:08:52.164 Namespace Management: Supported 00:08:52.164 Device Self-Test: Not Supported 00:08:52.164 Directives: Supported 00:08:52.164 NVMe-MI: Not Supported 00:08:52.164 Virtualization Management: Not Supported 00:08:52.164 Doorbell Buffer Config: Supported 00:08:52.164 Get LBA Status Capability: Not Supported 00:08:52.164 Command & Feature Lockdown Capability: Not Supported 00:08:52.164 Abort Command Limit: 4 00:08:52.164 Async Event Request Limit: 4 00:08:52.164 Number of Firmware Slots: N/A 00:08:52.164 Firmware Slot 1 Read-Only: N/A 00:08:52.164 Firmware Activation Without Reset: N/A 00:08:52.164 Multiple Update Detection Support: N/A 00:08:52.164 Firmware Update Granularity: No Information Provided 00:08:52.164 Per-Namespace SMART Log: Yes 00:08:52.164 Asymmetric Namespace Access Log Page: Not Supported 
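[Annotation] Unlike controllers 12340/12341/12342 above, 12343 reports "May have multiple controllers: Yes", "Endurance Groups: Supported" and "Flexible Data Placement Supported: Supported", and its namespace (listed further below) shows "Namespace Sharing Capabilities: Multiple Controllers" with an endurance group ID. A minimal C sketch that walks the active namespaces and prints size plus the sharing bit, assuming an attached 'ctrlr'; the nmic.can_share field name follows SPDK's Identify Namespace definition, but treat the exact field spelling as an assumption.

    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Iterate active namespaces the way the "Active Namespaces" sections
     * list them, printing size and whether the namespace may be shared
     * between controllers. Assumes 'ctrlr' is already attached. */
    static void
    list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

            printf("Namespace ID:%u\n", nsid);
            printf("  Size (in LBAs): %" PRIu64 "\n",
                   spdk_nvme_ns_get_num_sectors(ns));
            printf("  Sector Size: %u bytes\n",
                   spdk_nvme_ns_get_sector_size(ns));
            printf("  Sharing: %s\n",
                   nsdata->nmic.can_share ? "Multiple Controllers" : "Private");
        }
    }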
00:08:52.164 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.164 Command Effects Log Page: Supported 00:08:52.164 Get Log Page Extended Data: Supported 00:08:52.164 Telemetry Log Pages: Not Supported 00:08:52.164 Persistent Event Log Pages: Not Supported 00:08:52.164 Supported Log Pages Log Page: May Support 00:08:52.164 Commands Supported & Effects Log Page: Not Supported 00:08:52.164 Feature Identifiers & Effects Log Page:May Support 00:08:52.164 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.164 Data Area 4 for Telemetry Log: Not Supported 00:08:52.164 Error Log Page Entries Supported: 1 00:08:52.164 Keep Alive: Not Supported 00:08:52.164 00:08:52.164 NVM Command Set Attributes 00:08:52.164 ========================== 00:08:52.164 Submission Queue Entry Size 00:08:52.164 Max: 64 00:08:52.164 Min: 64 00:08:52.164 Completion Queue Entry Size 00:08:52.164 Max: 16 00:08:52.164 Min: 16 00:08:52.164 Number of Namespaces: 256 00:08:52.164 Compare Command: Supported 00:08:52.164 Write Uncorrectable Command: Not Supported 00:08:52.164 Dataset Management Command: Supported 00:08:52.164 Write Zeroes Command: Supported 00:08:52.164 Set Features Save Field: Supported 00:08:52.164 Reservations: Not Supported 00:08:52.164 Timestamp: Supported 00:08:52.164 Copy: Supported 00:08:52.164 Volatile Write Cache: Present 00:08:52.164 Atomic Write Unit (Normal): 1 00:08:52.164 Atomic Write Unit (PFail): 1 00:08:52.164 Atomic Compare & Write Unit: 1 00:08:52.164 Fused Compare & Write: Not Supported 00:08:52.164 Scatter-Gather List 00:08:52.164 SGL Command Set: Supported 00:08:52.164 SGL Keyed: Not Supported 00:08:52.164 SGL Bit Bucket Descriptor: Not Supported 00:08:52.164 SGL Metadata Pointer: Not Supported 00:08:52.164 Oversized SGL: Not Supported 00:08:52.164 SGL Metadata Address: Not Supported 00:08:52.165 SGL Offset: Not Supported 00:08:52.165 Transport SGL Data Block: Not Supported 00:08:52.165 Replay Protected Memory Block: Not Supported 00:08:52.165 00:08:52.165 Firmware Slot Information 00:08:52.165 ========================= 00:08:52.165 Active slot: 1 00:08:52.165 Slot 1 Firmware Revision: 1.0 00:08:52.165 00:08:52.165 00:08:52.165 Commands Supported and Effects 00:08:52.165 ============================== 00:08:52.165 Admin Commands 00:08:52.165 -------------- 00:08:52.165 Delete I/O Submission Queue (00h): Supported 00:08:52.165 Create I/O Submission Queue (01h): Supported 00:08:52.165 Get Log Page (02h): Supported 00:08:52.165 Delete I/O Completion Queue (04h): Supported 00:08:52.165 Create I/O Completion Queue (05h): Supported 00:08:52.165 Identify (06h): Supported 00:08:52.165 Abort (08h): Supported 00:08:52.165 Set Features (09h): Supported 00:08:52.165 Get Features (0Ah): Supported 00:08:52.165 Asynchronous Event Request (0Ch): Supported 00:08:52.165 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.165 Directive Send (19h): Supported 00:08:52.165 Directive Receive (1Ah): Supported 00:08:52.165 Virtualization Management (1Ch): Supported 00:08:52.165 Doorbell Buffer Config (7Ch): Supported 00:08:52.165 Format NVM (80h): Supported LBA-Change 00:08:52.165 I/O Commands 00:08:52.165 ------------ 00:08:52.165 Flush (00h): Supported LBA-Change 00:08:52.165 Write (01h): Supported LBA-Change 00:08:52.165 Read (02h): Supported 00:08:52.165 Compare (05h): Supported 00:08:52.165 Write Zeroes (08h): Supported LBA-Change 00:08:52.165 Dataset Management (09h): Supported LBA-Change 00:08:52.165 Unknown (0Ch): Supported 00:08:52.165 Unknown (12h): Supported 00:08:52.165 Copy 
(19h): Supported LBA-Change 00:08:52.165 Unknown (1Dh): Supported LBA-Change 00:08:52.165 00:08:52.165 Error Log 00:08:52.165 ========= 00:08:52.165 00:08:52.165 Arbitration 00:08:52.165 =========== 00:08:52.165 Arbitration Burst: no limit 00:08:52.165 00:08:52.165 Power Management 00:08:52.165 ================ 00:08:52.165 Number of Power States: 1 00:08:52.165 Current Power State: Power State #0 00:08:52.165 Power State #0: 00:08:52.165 Max Power: 25.00 W 00:08:52.165 Non-Operational State: Operational 00:08:52.165 Entry Latency: 16 microseconds 00:08:52.165 Exit Latency: 4 microseconds 00:08:52.165 Relative Read Throughput: 0 00:08:52.165 Relative Read Latency: 0 00:08:52.165 Relative Write Throughput: 0 00:08:52.165 Relative Write Latency: 0 00:08:52.165 Idle Power: Not Reported 00:08:52.165 Active Power: Not Reported 00:08:52.165 Non-Operational Permissive Mode: Not Supported 00:08:52.165 00:08:52.165 Health Information 00:08:52.165 ================== 00:08:52.165 Critical Warnings: 00:08:52.165 Available Spare Space: OK 00:08:52.165 Temperature: OK 00:08:52.165 Device Reliability: OK 00:08:52.165 Read Only: No 00:08:52.165 Volatile Memory Backup: OK 00:08:52.165 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.165 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.165 Available Spare: 0% 00:08:52.165 Available Spare Threshold: 0% 00:08:52.165 Life Percentage Used: 0% 00:08:52.165 Data Units Read: 920 00:08:52.165 Data Units Written: 849 00:08:52.165 Host Read Commands: 39064 00:08:52.165 Host Write Commands: 38487 00:08:52.165 Controller Busy Time: 0 minutes 00:08:52.165 Power Cycles: 0 00:08:52.165 Power On Hours: 0 hours 00:08:52.165 Unsafe Shutdowns: 0 00:08:52.165 Unrecoverable Media Errors: 0 00:08:52.165 Lifetime Error Log Entries: 0 00:08:52.165 Warning Temperature Time: 0 minutes 00:08:52.165 Critical Temperature Time: 0 minutes 00:08:52.165 00:08:52.165 Number of Queues 00:08:52.165 ================ 00:08:52.165 Number of I/O Submission Queues: 64 00:08:52.165 Number of I/O Completion Queues: 64 00:08:52.165 00:08:52.165 ZNS Specific Controller Data 00:08:52.165 ============================ 00:08:52.165 Zone Append Size Limit: 0 00:08:52.165 00:08:52.165 00:08:52.165 Active Namespaces 00:08:52.165 ================= 00:08:52.165 Namespace ID:1 00:08:52.165 Error Recovery Timeout: Unlimited 00:08:52.165 Command Set Identifier: NVM (00h) 00:08:52.165 Deallocate: Supported 00:08:52.165 Deallocated/Unwritten Error: Supported 00:08:52.165 Deallocated Read Value: All 0x00 00:08:52.165 Deallocate in Write Zeroes: Not Supported 00:08:52.165 Deallocated Guard Field: 0xFFFF 00:08:52.165 Flush: Supported 00:08:52.165 Reservation: Not Supported 00:08:52.165 Namespace Sharing Capabilities: Multiple Controllers 00:08:52.165 Size (in LBAs): 262144 (1GiB) 00:08:52.165 Capacity (in LBAs): 262144 (1GiB) 00:08:52.165 Utilization (in LBAs): 262144 (1GiB) 00:08:52.165 Thin Provisioning: Not Supported 00:08:52.165 Per-NS Atomic Units: No 00:08:52.165 Maximum Single Source Range Length: 128 00:08:52.165 Maximum Copy Length: 128 00:08:52.165 Maximum Source Range Count: 128 00:08:52.165 NGUID/EUI64 Never Reused: No 00:08:52.165 Namespace Write Protected: No 00:08:52.165 Endurance group ID: 1 00:08:52.165 Number of LBA Formats: 8 00:08:52.165 Current LBA Format: LBA Format #04 00:08:52.165 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.165 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.165 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.165 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:52.165 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.165 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.165 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.165 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.165 00:08:52.165 Get Feature FDP: 00:08:52.165 ================ 00:08:52.165 Enabled: Yes 00:08:52.165 FDP configuration index: 0 00:08:52.165 00:08:52.165 FDP configurations log page 00:08:52.165 =========================== 00:08:52.165 Number of FDP configurations: 1 00:08:52.165 Version: 0 00:08:52.165 Size: 112 00:08:52.165 FDP Configuration Descriptor: 0 00:08:52.165 Descriptor Size: 96 00:08:52.165 Reclaim Group Identifier format: 2 00:08:52.165 FDP Volatile Write Cache: Not Present 00:08:52.165 FDP Configuration: Valid 00:08:52.165 Vendor Specific Size: 0 00:08:52.165 Number of Reclaim Groups: 2 00:08:52.165 Number of Reclaim Unit Handles: 8 00:08:52.165 Max Placement Identifiers: 128 00:08:52.165 Number of Namespaces Supported: 256 00:08:52.165 Reclaim Unit Nominal Size: 6000000 bytes 00:08:52.165 Estimated Reclaim Unit Time Limit: Not Reported 00:08:52.165 RUH Desc #000: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #001: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #002: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #003: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #004: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #005: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #006: RUH Type: Initially Isolated 00:08:52.165 RUH Desc #007: RUH Type: Initially Isolated 00:08:52.165 00:08:52.165 FDP reclaim unit handle usage log page 00:08:52.165 ====================================== 00:08:52.165 Number of Reclaim Unit Handles: 8 00:08:52.165 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:52.165 RUH Usage Desc #001: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #002: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #003: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #004: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #005: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #006: RUH Attributes: Unused 00:08:52.165 RUH Usage Desc #007: RUH Attributes: Unused 00:08:52.165 00:08:52.165 FDP statistics log page 00:08:52.165 ======================= 00:08:52.165 Host bytes with metadata written: 540319744 00:08:52.165 Media bytes with metadata written: 542932992 00:08:52.165 Media bytes erased: 0 00:08:52.165 00:08:52.165 FDP events log page 00:08:52.165 =================== 00:08:52.165 Number of FDP events: 0 00:08:52.165 00:08:52.165 NVM Specific Namespace Data 00:08:52.165 =========================== 00:08:52.165 Logical Block Storage Tag Mask: 0 00:08:52.165 Protection Information Capabilities: 00:08:52.165 16b Guard Protection Information Storage Tag Support: No 00:08:52.165 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.165 Storage Tag Check Read Support: No 00:08:52.165 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.165 00:08:52.165 real 0m1.756s 00:08:52.165 user 0m0.640s 00:08:52.165 sys 0m0.873s 00:08:52.165 15:13:34 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.165 15:13:34 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:52.165 ************************************ 00:08:52.165 END TEST nvme_identify 00:08:52.165 ************************************ 00:08:52.424 15:13:34 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:52.424 15:13:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.424 15:13:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.424 15:13:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.424 ************************************ 00:08:52.424 START TEST nvme_perf 00:08:52.424 ************************************ 00:08:52.424 15:13:34 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:52.425 15:13:34 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:53.800 Initializing NVMe Controllers 00:08:53.800 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.800 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.800 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.800 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.800 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:53.800 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:53.800 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:53.800 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:53.800 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:53.800 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:53.800 Initialization complete. Launching workers. 
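The spdk_nvme_perf invocation above issues 100% reads (-w read) of 12288 bytes each (-o 12288, i.e. three 4096-byte blocks in the active LBA format #04) at a queue depth of 128 per namespace (-q 128) for one second (-t 1); -L given twice requests the per-device latency summaries plus the detailed histograms that follow, while -i and -N select shared-memory-ID and shutdown-notification behavior. The headline numbers that follow are self-consistent under Little's law (outstanding I/Os = IOPS x mean latency): for the first namespace, 13254.65 IO/s x 9677.91 us = ~128.3 outstanding I/Os, matching the configured queue depth of 128.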
00:08:53.800 ======================================================== 00:08:53.800 Latency(us) 00:08:53.800 Device Information : IOPS MiB/s Average min max 00:08:53.800 PCIE (0000:00:10.0) NSID 1 from core 0: 13254.65 155.33 9677.91 8276.85 45234.30 00:08:53.800 PCIE (0000:00:11.0) NSID 1 from core 0: 13254.65 155.33 9663.25 8376.91 43344.07 00:08:53.800 PCIE (0000:00:13.0) NSID 1 from core 0: 13254.65 155.33 9646.14 8369.41 42319.77 00:08:53.800 PCIE (0000:00:12.0) NSID 1 from core 0: 13254.65 155.33 9629.74 8361.33 40543.36 00:08:53.800 PCIE (0000:00:12.0) NSID 2 from core 0: 13254.65 155.33 9613.33 8347.84 38824.19 00:08:53.800 PCIE (0000:00:12.0) NSID 3 from core 0: 13318.37 156.07 9550.85 8360.90 30464.31 00:08:53.800 ======================================================== 00:08:53.800 Total : 79591.61 932.71 9630.14 8276.85 45234.30 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8422.297us 00:08:53.800 10.00000% : 8685.494us 00:08:53.800 25.00000% : 8896.051us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9527.724us 00:08:53.800 90.00000% : 9896.199us 00:08:53.800 95.00000% : 10633.150us 00:08:53.800 98.00000% : 15897.086us 00:08:53.800 99.00000% : 19897.677us 00:08:53.800 99.50000% : 36847.550us 00:08:53.800 99.90000% : 44848.733us 00:08:53.800 99.99000% : 45269.847us 00:08:53.800 99.99900% : 45269.847us 00:08:53.800 99.99990% : 45269.847us 00:08:53.800 99.99999% : 45269.847us 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8527.576us 00:08:53.800 10.00000% : 8738.133us 00:08:53.800 25.00000% : 8948.691us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9475.084us 00:08:53.800 90.00000% : 9896.199us 00:08:53.800 95.00000% : 10580.511us 00:08:53.800 98.00000% : 16528.758us 00:08:53.800 99.00000% : 19897.677us 00:08:53.800 99.50000% : 35163.091us 00:08:53.800 99.90000% : 43164.273us 00:08:53.800 99.99000% : 43374.831us 00:08:53.800 99.99900% : 43374.831us 00:08:53.800 99.99990% : 43374.831us 00:08:53.800 99.99999% : 43374.831us 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8527.576us 00:08:53.800 10.00000% : 8738.133us 00:08:53.800 25.00000% : 8948.691us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9475.084us 00:08:53.800 90.00000% : 9843.560us 00:08:53.800 95.00000% : 10580.511us 00:08:53.800 98.00000% : 16949.873us 00:08:53.800 99.00000% : 20634.628us 00:08:53.800 99.50000% : 34110.304us 00:08:53.800 99.90000% : 42111.486us 00:08:53.800 99.99000% : 42322.043us 00:08:53.800 99.99900% : 42322.043us 00:08:53.800 99.99990% : 42322.043us 00:08:53.800 99.99999% : 42322.043us 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8527.576us 00:08:53.800 10.00000% : 8738.133us 00:08:53.800 25.00000% : 8948.691us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9475.084us 00:08:53.800 90.00000% : 9843.560us 00:08:53.800 95.00000% : 10580.511us 00:08:53.800 98.00000% : 16423.480us 00:08:53.800 99.00000% : 
20845.186us 00:08:53.800 99.50000% : 32425.844us 00:08:53.800 99.90000% : 40216.469us 00:08:53.800 99.99000% : 40637.584us 00:08:53.800 99.99900% : 40637.584us 00:08:53.800 99.99990% : 40637.584us 00:08:53.800 99.99999% : 40637.584us 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8527.576us 00:08:53.800 10.00000% : 8738.133us 00:08:53.800 25.00000% : 8948.691us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9475.084us 00:08:53.800 90.00000% : 9843.560us 00:08:53.800 95.00000% : 10580.511us 00:08:53.800 98.00000% : 16002.365us 00:08:53.800 99.00000% : 20424.071us 00:08:53.800 99.50000% : 30741.385us 00:08:53.800 99.90000% : 38532.010us 00:08:53.800 99.99000% : 38953.124us 00:08:53.800 99.99900% : 38953.124us 00:08:53.800 99.99990% : 38953.124us 00:08:53.800 99.99999% : 38953.124us 00:08:53.800 00:08:53.800 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:53.800 ================================================================================= 00:08:53.800 1.00000% : 8527.576us 00:08:53.800 10.00000% : 8738.133us 00:08:53.800 25.00000% : 8948.691us 00:08:53.800 50.00000% : 9211.888us 00:08:53.800 75.00000% : 9475.084us 00:08:53.800 90.00000% : 9896.199us 00:08:53.800 95.00000% : 10843.708us 00:08:53.800 98.00000% : 15370.692us 00:08:53.800 99.00000% : 19687.120us 00:08:53.800 99.50000% : 22634.924us 00:08:53.800 99.90000% : 30320.270us 00:08:53.800 99.99000% : 30530.827us 00:08:53.800 99.99900% : 30530.827us 00:08:53.800 99.99990% : 30530.827us 00:08:53.800 99.99999% : 30530.827us 00:08:53.800 00:08:53.800 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:53.800 ============================================================================== 00:08:53.800 Range in us Cumulative IO count 00:08:53.800 8264.379 - 8317.018: 0.1427% ( 19) 00:08:53.800 8317.018 - 8369.658: 0.4808% ( 45) 00:08:53.800 8369.658 - 8422.297: 1.1118% ( 84) 00:08:53.800 8422.297 - 8474.937: 2.2085% ( 146) 00:08:53.800 8474.937 - 8527.576: 3.5832% ( 183) 00:08:53.800 8527.576 - 8580.215: 5.6040% ( 269) 00:08:53.800 8580.215 - 8632.855: 7.9327% ( 310) 00:08:53.800 8632.855 - 8685.494: 10.9150% ( 397) 00:08:53.800 8685.494 - 8738.133: 14.2803% ( 448) 00:08:53.800 8738.133 - 8790.773: 17.8861% ( 480) 00:08:53.800 8790.773 - 8843.412: 21.9201% ( 537) 00:08:53.800 8843.412 - 8896.051: 25.9240% ( 533) 00:08:53.800 8896.051 - 8948.691: 29.9504% ( 536) 00:08:53.800 8948.691 - 9001.330: 34.2022% ( 566) 00:08:53.800 9001.330 - 9053.969: 38.6043% ( 586) 00:08:53.800 9053.969 - 9106.609: 43.1265% ( 602) 00:08:53.800 9106.609 - 9159.248: 47.5511% ( 589) 00:08:53.800 9159.248 - 9211.888: 51.9907% ( 591) 00:08:53.800 9211.888 - 9264.527: 56.7007% ( 627) 00:08:53.800 9264.527 - 9317.166: 61.2079% ( 600) 00:08:53.800 9317.166 - 9369.806: 65.5950% ( 584) 00:08:53.800 9369.806 - 9422.445: 69.9069% ( 574) 00:08:53.800 9422.445 - 9475.084: 73.7831% ( 516) 00:08:53.800 9475.084 - 9527.724: 77.3513% ( 475) 00:08:53.800 9527.724 - 9580.363: 80.4237% ( 409) 00:08:53.800 9580.363 - 9633.002: 83.1731% ( 366) 00:08:53.800 9633.002 - 9685.642: 85.2539% ( 277) 00:08:53.800 9685.642 - 9738.281: 87.0643% ( 241) 00:08:53.800 9738.281 - 9790.920: 88.3864% ( 176) 00:08:53.800 9790.920 - 9843.560: 89.4832% ( 146) 00:08:53.800 9843.560 - 9896.199: 90.3696% ( 118) 00:08:53.800 9896.199 - 9948.839: 91.0532% ( 91) 00:08:53.800 9948.839 - 10001.478: 
91.6992% ( 86) 00:08:53.800 10001.478 - 10054.117: 92.3002% ( 80) 00:08:53.800 10054.117 - 10106.757: 92.8260% ( 70) 00:08:53.800 10106.757 - 10159.396: 93.2016% ( 50) 00:08:53.800 10159.396 - 10212.035: 93.5096% ( 41) 00:08:53.800 10212.035 - 10264.675: 93.8326% ( 43) 00:08:53.800 10264.675 - 10317.314: 94.1106% ( 37) 00:08:53.800 10317.314 - 10369.953: 94.2909% ( 24) 00:08:53.800 10369.953 - 10422.593: 94.4937% ( 27) 00:08:53.800 10422.593 - 10475.232: 94.6214% ( 17) 00:08:53.800 10475.232 - 10527.871: 94.8017% ( 24) 00:08:53.800 10527.871 - 10580.511: 94.9144% ( 15) 00:08:53.800 10580.511 - 10633.150: 95.0721% ( 21) 00:08:53.800 10633.150 - 10685.790: 95.1623% ( 12) 00:08:53.800 10685.790 - 10738.429: 95.2524% ( 12) 00:08:53.800 10738.429 - 10791.068: 95.3350% ( 11) 00:08:53.800 10791.068 - 10843.708: 95.4477% ( 15) 00:08:53.800 10843.708 - 10896.347: 95.5228% ( 10) 00:08:53.800 10896.347 - 10948.986: 95.6430% ( 16) 00:08:53.800 10948.986 - 11001.626: 95.7031% ( 8) 00:08:53.800 11001.626 - 11054.265: 95.7332% ( 4) 00:08:53.800 11054.265 - 11106.904: 95.7632% ( 4) 00:08:53.801 11106.904 - 11159.544: 95.7933% ( 4) 00:08:53.801 11159.544 - 11212.183: 95.8158% ( 3) 00:08:53.801 11212.183 - 11264.822: 95.8609% ( 6) 00:08:53.801 11264.822 - 11317.462: 95.8834% ( 3) 00:08:53.801 11317.462 - 11370.101: 95.9135% ( 4) 00:08:53.801 11370.101 - 11422.741: 95.9736% ( 8) 00:08:53.801 11422.741 - 11475.380: 96.0111% ( 5) 00:08:53.801 11475.380 - 11528.019: 96.0862% ( 10) 00:08:53.801 11528.019 - 11580.659: 96.1163% ( 4) 00:08:53.801 11580.659 - 11633.298: 96.1538% ( 5) 00:08:53.801 11633.298 - 11685.937: 96.1914% ( 5) 00:08:53.801 11685.937 - 11738.577: 96.2139% ( 3) 00:08:53.801 11738.577 - 11791.216: 96.2515% ( 5) 00:08:53.801 11791.216 - 11843.855: 96.2816% ( 4) 00:08:53.801 11843.855 - 11896.495: 96.3041% ( 3) 00:08:53.801 11896.495 - 11949.134: 96.3266% ( 3) 00:08:53.801 11949.134 - 12001.773: 96.3416% ( 2) 00:08:53.801 12001.773 - 12054.413: 96.3567% ( 2) 00:08:53.801 12054.413 - 12107.052: 96.3792% ( 3) 00:08:53.801 12107.052 - 12159.692: 96.3942% ( 2) 00:08:53.801 12159.692 - 12212.331: 96.4093% ( 2) 00:08:53.801 12212.331 - 12264.970: 96.4168% ( 1) 00:08:53.801 12264.970 - 12317.610: 96.4393% ( 3) 00:08:53.801 12317.610 - 12370.249: 96.4543% ( 2) 00:08:53.801 12370.249 - 12422.888: 96.4694% ( 2) 00:08:53.801 12422.888 - 12475.528: 96.4919% ( 3) 00:08:53.801 12475.528 - 12528.167: 96.5069% ( 2) 00:08:53.801 12528.167 - 12580.806: 96.5144% ( 1) 00:08:53.801 12580.806 - 12633.446: 96.5294% ( 2) 00:08:53.801 12633.446 - 12686.085: 96.5520% ( 3) 00:08:53.801 12686.085 - 12738.724: 96.5595% ( 1) 00:08:53.801 12738.724 - 12791.364: 96.5820% ( 3) 00:08:53.801 12791.364 - 12844.003: 96.5971% ( 2) 00:08:53.801 12844.003 - 12896.643: 96.6121% ( 2) 00:08:53.801 12896.643 - 12949.282: 96.6271% ( 2) 00:08:53.801 12949.282 - 13001.921: 96.6346% ( 1) 00:08:53.801 13107.200 - 13159.839: 96.6421% ( 1) 00:08:53.801 13159.839 - 13212.479: 96.6722% ( 4) 00:08:53.801 13212.479 - 13265.118: 96.6947% ( 3) 00:08:53.801 13265.118 - 13317.757: 96.7248% ( 4) 00:08:53.801 13317.757 - 13370.397: 96.7473% ( 3) 00:08:53.801 13370.397 - 13423.036: 96.7698% ( 3) 00:08:53.801 13423.036 - 13475.676: 96.7999% ( 4) 00:08:53.801 13475.676 - 13580.954: 96.8675% ( 9) 00:08:53.801 13580.954 - 13686.233: 96.9426% ( 10) 00:08:53.801 13686.233 - 13791.512: 97.0252% ( 11) 00:08:53.801 13791.512 - 13896.790: 97.1079% ( 11) 00:08:53.801 13896.790 - 14002.069: 97.1605% ( 7) 00:08:53.801 14002.069 - 14107.348: 97.2356% ( 10) 00:08:53.801 
14107.348 - 14212.627: 97.2731% ( 5) 00:08:53.801 14212.627 - 14317.905: 97.3032% ( 4) 00:08:53.801 14317.905 - 14423.184: 97.3257% ( 3) 00:08:53.801 14423.184 - 14528.463: 97.3558% ( 4) 00:08:53.801 14528.463 - 14633.741: 97.3783% ( 3) 00:08:53.801 14633.741 - 14739.020: 97.4159% ( 5) 00:08:53.801 14739.020 - 14844.299: 97.4985% ( 11) 00:08:53.801 14844.299 - 14949.578: 97.5436% ( 6) 00:08:53.801 14949.578 - 15054.856: 97.6112% ( 9) 00:08:53.801 15054.856 - 15160.135: 97.6788% ( 9) 00:08:53.801 15160.135 - 15265.414: 97.7539% ( 10) 00:08:53.801 15265.414 - 15370.692: 97.8140% ( 8) 00:08:53.801 15370.692 - 15475.971: 97.8666% ( 7) 00:08:53.801 15475.971 - 15581.250: 97.9117% ( 6) 00:08:53.801 15581.250 - 15686.529: 97.9492% ( 5) 00:08:53.801 15686.529 - 15791.807: 97.9868% ( 5) 00:08:53.801 15791.807 - 15897.086: 98.0168% ( 4) 00:08:53.801 15897.086 - 16002.365: 98.0619% ( 6) 00:08:53.801 16002.365 - 16107.643: 98.0769% ( 2) 00:08:53.801 17581.545 - 17686.824: 98.1070% ( 4) 00:08:53.801 17686.824 - 17792.103: 98.1445% ( 5) 00:08:53.801 17792.103 - 17897.382: 98.1746% ( 4) 00:08:53.801 17897.382 - 18002.660: 98.2046% ( 4) 00:08:53.801 18002.660 - 18107.939: 98.2347% ( 4) 00:08:53.801 18107.939 - 18213.218: 98.2722% ( 5) 00:08:53.801 18213.218 - 18318.496: 98.3023% ( 4) 00:08:53.801 18318.496 - 18423.775: 98.3398% ( 5) 00:08:53.801 18423.775 - 18529.054: 98.4150% ( 10) 00:08:53.801 18529.054 - 18634.333: 98.4826% ( 9) 00:08:53.801 18634.333 - 18739.611: 98.5427% ( 8) 00:08:53.801 18739.611 - 18844.890: 98.6178% ( 10) 00:08:53.801 18844.890 - 18950.169: 98.6854% ( 9) 00:08:53.801 18950.169 - 19055.447: 98.7530% ( 9) 00:08:53.801 19055.447 - 19160.726: 98.8206% ( 9) 00:08:53.801 19160.726 - 19266.005: 98.8507% ( 4) 00:08:53.801 19266.005 - 19371.284: 98.8732% ( 3) 00:08:53.801 19371.284 - 19476.562: 98.9032% ( 4) 00:08:53.801 19476.562 - 19581.841: 98.9258% ( 3) 00:08:53.801 19581.841 - 19687.120: 98.9558% ( 4) 00:08:53.801 19687.120 - 19792.398: 98.9859% ( 4) 00:08:53.801 19792.398 - 19897.677: 99.0159% ( 4) 00:08:53.801 19897.677 - 20002.956: 99.0385% ( 3) 00:08:53.801 34741.976 - 34952.533: 99.0460% ( 1) 00:08:53.801 34952.533 - 35163.091: 99.0910% ( 6) 00:08:53.801 35163.091 - 35373.648: 99.1511% ( 8) 00:08:53.801 35373.648 - 35584.206: 99.2037% ( 7) 00:08:53.801 35584.206 - 35794.763: 99.2563% ( 7) 00:08:53.801 35794.763 - 36005.320: 99.3014% ( 6) 00:08:53.801 36005.320 - 36215.878: 99.3615% ( 8) 00:08:53.801 36215.878 - 36426.435: 99.4291% ( 9) 00:08:53.801 36426.435 - 36636.993: 99.4742% ( 6) 00:08:53.801 36636.993 - 36847.550: 99.5192% ( 6) 00:08:53.801 43164.273 - 43374.831: 99.5568% ( 5) 00:08:53.801 43374.831 - 43585.388: 99.6094% ( 7) 00:08:53.801 43585.388 - 43795.945: 99.6620% ( 7) 00:08:53.801 43795.945 - 44006.503: 99.7070% ( 6) 00:08:53.801 44006.503 - 44217.060: 99.7521% ( 6) 00:08:53.801 44217.060 - 44427.618: 99.7972% ( 6) 00:08:53.801 44427.618 - 44638.175: 99.8573% ( 8) 00:08:53.801 44638.175 - 44848.733: 99.9099% ( 7) 00:08:53.801 44848.733 - 45059.290: 99.9624% ( 7) 00:08:53.801 45059.290 - 45269.847: 100.0000% ( 5) 00:08:53.801 00:08:53.801 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:53.801 ============================================================================== 00:08:53.801 Range in us Cumulative IO count 00:08:53.801 8369.658 - 8422.297: 0.2254% ( 30) 00:08:53.801 8422.297 - 8474.937: 0.7737% ( 73) 00:08:53.801 8474.937 - 8527.576: 1.6226% ( 113) 00:08:53.801 8527.576 - 8580.215: 2.9072% ( 171) 00:08:53.801 8580.215 - 8632.855: 4.8227% ( 
255) 00:08:53.801 8632.855 - 8685.494: 7.4144% ( 345) 00:08:53.801 8685.494 - 8738.133: 10.7197% ( 440) 00:08:53.801 8738.133 - 8790.773: 14.4681% ( 499) 00:08:53.801 8790.773 - 8843.412: 18.7876% ( 575) 00:08:53.801 8843.412 - 8896.051: 23.2873% ( 599) 00:08:53.801 8896.051 - 8948.691: 27.9672% ( 623) 00:08:53.801 8948.691 - 9001.330: 32.7900% ( 642) 00:08:53.801 9001.330 - 9053.969: 37.8155% ( 669) 00:08:53.801 9053.969 - 9106.609: 42.7809% ( 661) 00:08:53.801 9106.609 - 9159.248: 47.9567% ( 689) 00:08:53.801 9159.248 - 9211.888: 53.0499% ( 678) 00:08:53.801 9211.888 - 9264.527: 58.3909% ( 711) 00:08:53.801 9264.527 - 9317.166: 63.4691% ( 676) 00:08:53.801 9317.166 - 9369.806: 68.1941% ( 629) 00:08:53.801 9369.806 - 9422.445: 72.7163% ( 602) 00:08:53.801 9422.445 - 9475.084: 76.6752% ( 527) 00:08:53.801 9475.084 - 9527.724: 79.8978% ( 429) 00:08:53.801 9527.724 - 9580.363: 82.6698% ( 369) 00:08:53.801 9580.363 - 9633.002: 84.7806% ( 281) 00:08:53.801 9633.002 - 9685.642: 86.4934% ( 228) 00:08:53.801 9685.642 - 9738.281: 87.8756% ( 184) 00:08:53.801 9738.281 - 9790.920: 89.0325% ( 154) 00:08:53.801 9790.920 - 9843.560: 89.9940% ( 128) 00:08:53.801 9843.560 - 9896.199: 90.8128% ( 109) 00:08:53.801 9896.199 - 9948.839: 91.4663% ( 87) 00:08:53.801 9948.839 - 10001.478: 92.0748% ( 81) 00:08:53.801 10001.478 - 10054.117: 92.6232% ( 73) 00:08:53.801 10054.117 - 10106.757: 93.1340% ( 68) 00:08:53.801 10106.757 - 10159.396: 93.5096% ( 50) 00:08:53.801 10159.396 - 10212.035: 93.7725% ( 35) 00:08:53.801 10212.035 - 10264.675: 94.0355% ( 35) 00:08:53.801 10264.675 - 10317.314: 94.2608% ( 30) 00:08:53.801 10317.314 - 10369.953: 94.4486% ( 25) 00:08:53.801 10369.953 - 10422.593: 94.6364% ( 25) 00:08:53.801 10422.593 - 10475.232: 94.7867% ( 20) 00:08:53.801 10475.232 - 10527.871: 94.9294% ( 19) 00:08:53.801 10527.871 - 10580.511: 95.0721% ( 19) 00:08:53.801 10580.511 - 10633.150: 95.1773% ( 14) 00:08:53.801 10633.150 - 10685.790: 95.2299% ( 7) 00:08:53.801 10685.790 - 10738.429: 95.3050% ( 10) 00:08:53.801 10738.429 - 10791.068: 95.3951% ( 12) 00:08:53.801 10791.068 - 10843.708: 95.4853% ( 12) 00:08:53.801 10843.708 - 10896.347: 95.5529% ( 9) 00:08:53.801 10896.347 - 10948.986: 95.5904% ( 5) 00:08:53.801 10948.986 - 11001.626: 95.6280% ( 5) 00:08:53.801 11001.626 - 11054.265: 95.6656% ( 5) 00:08:53.801 11054.265 - 11106.904: 95.7106% ( 6) 00:08:53.801 11106.904 - 11159.544: 95.7632% ( 7) 00:08:53.801 11159.544 - 11212.183: 95.7858% ( 3) 00:08:53.801 11212.183 - 11264.822: 95.8233% ( 5) 00:08:53.801 11264.822 - 11317.462: 95.8609% ( 5) 00:08:53.801 11317.462 - 11370.101: 95.8834% ( 3) 00:08:53.801 11370.101 - 11422.741: 95.9135% ( 4) 00:08:53.801 11422.741 - 11475.380: 95.9510% ( 5) 00:08:53.801 11475.380 - 11528.019: 95.9961% ( 6) 00:08:53.801 11528.019 - 11580.659: 96.0337% ( 5) 00:08:53.801 11580.659 - 11633.298: 96.0712% ( 5) 00:08:53.801 11633.298 - 11685.937: 96.1088% ( 5) 00:08:53.801 11685.937 - 11738.577: 96.1463% ( 5) 00:08:53.801 11738.577 - 11791.216: 96.1914% ( 6) 00:08:53.801 11791.216 - 11843.855: 96.2290% ( 5) 00:08:53.801 11843.855 - 11896.495: 96.2665% ( 5) 00:08:53.801 11896.495 - 11949.134: 96.3041% ( 5) 00:08:53.801 11949.134 - 12001.773: 96.3266% ( 3) 00:08:53.801 12001.773 - 12054.413: 96.3642% ( 5) 00:08:53.801 12054.413 - 12107.052: 96.4093% ( 6) 00:08:53.801 12107.052 - 12159.692: 96.4543% ( 6) 00:08:53.801 12159.692 - 12212.331: 96.4769% ( 3) 00:08:53.801 12212.331 - 12264.970: 96.4994% ( 3) 00:08:53.801 12264.970 - 12317.610: 96.5219% ( 3) 00:08:53.801 12317.610 - 12370.249: 
96.5370% ( 2) 00:08:53.801 12370.249 - 12422.888: 96.5595% ( 3) 00:08:53.801 12422.888 - 12475.528: 96.5820% ( 3) 00:08:53.801 12475.528 - 12528.167: 96.5971% ( 2) 00:08:53.801 12528.167 - 12580.806: 96.6196% ( 3) 00:08:53.802 12580.806 - 12633.446: 96.6346% ( 2) 00:08:53.802 13107.200 - 13159.839: 96.6647% ( 4) 00:08:53.802 13159.839 - 13212.479: 96.6947% ( 4) 00:08:53.802 13212.479 - 13265.118: 96.7323% ( 5) 00:08:53.802 13265.118 - 13317.757: 96.7623% ( 4) 00:08:53.802 13317.757 - 13370.397: 96.7924% ( 4) 00:08:53.802 13370.397 - 13423.036: 96.8224% ( 4) 00:08:53.802 13423.036 - 13475.676: 96.8525% ( 4) 00:08:53.802 13475.676 - 13580.954: 96.9201% ( 9) 00:08:53.802 13580.954 - 13686.233: 97.0102% ( 12) 00:08:53.802 13686.233 - 13791.512: 97.1079% ( 13) 00:08:53.802 13791.512 - 13896.790: 97.2055% ( 13) 00:08:53.802 13896.790 - 14002.069: 97.2581% ( 7) 00:08:53.802 14002.069 - 14107.348: 97.2806% ( 3) 00:08:53.802 14107.348 - 14212.627: 97.3182% ( 5) 00:08:53.802 14212.627 - 14317.905: 97.3483% ( 4) 00:08:53.802 14317.905 - 14423.184: 97.3858% ( 5) 00:08:53.802 14423.184 - 14528.463: 97.4234% ( 5) 00:08:53.802 14528.463 - 14633.741: 97.4534% ( 4) 00:08:53.802 14633.741 - 14739.020: 97.4910% ( 5) 00:08:53.802 14739.020 - 14844.299: 97.5210% ( 4) 00:08:53.802 14844.299 - 14949.578: 97.5586% ( 5) 00:08:53.802 14949.578 - 15054.856: 97.5886% ( 4) 00:08:53.802 15054.856 - 15160.135: 97.5962% ( 1) 00:08:53.802 15686.529 - 15791.807: 97.6412% ( 6) 00:08:53.802 15791.807 - 15897.086: 97.7013% ( 8) 00:08:53.802 15897.086 - 16002.365: 97.7464% ( 6) 00:08:53.802 16002.365 - 16107.643: 97.7915% ( 6) 00:08:53.802 16107.643 - 16212.922: 97.8441% ( 7) 00:08:53.802 16212.922 - 16318.201: 97.8966% ( 7) 00:08:53.802 16318.201 - 16423.480: 97.9492% ( 7) 00:08:53.802 16423.480 - 16528.758: 98.0093% ( 8) 00:08:53.802 16528.758 - 16634.037: 98.0619% ( 7) 00:08:53.802 16634.037 - 16739.316: 98.0769% ( 2) 00:08:53.802 16844.594 - 16949.873: 98.1070% ( 4) 00:08:53.802 16949.873 - 17055.152: 98.1445% ( 5) 00:08:53.802 17055.152 - 17160.431: 98.1821% ( 5) 00:08:53.802 17160.431 - 17265.709: 98.2272% ( 6) 00:08:53.802 17265.709 - 17370.988: 98.2647% ( 5) 00:08:53.802 17370.988 - 17476.267: 98.3098% ( 6) 00:08:53.802 17476.267 - 17581.545: 98.3474% ( 5) 00:08:53.802 17581.545 - 17686.824: 98.3774% ( 4) 00:08:53.802 17686.824 - 17792.103: 98.4225% ( 6) 00:08:53.802 17792.103 - 17897.382: 98.4675% ( 6) 00:08:53.802 17897.382 - 18002.660: 98.5126% ( 6) 00:08:53.802 18002.660 - 18107.939: 98.5502% ( 5) 00:08:53.802 18107.939 - 18213.218: 98.5577% ( 1) 00:08:53.802 18739.611 - 18844.890: 98.5877% ( 4) 00:08:53.802 18844.890 - 18950.169: 98.6328% ( 6) 00:08:53.802 18950.169 - 19055.447: 98.6779% ( 6) 00:08:53.802 19055.447 - 19160.726: 98.7154% ( 5) 00:08:53.802 19160.726 - 19266.005: 98.7605% ( 6) 00:08:53.802 19266.005 - 19371.284: 98.8056% ( 6) 00:08:53.802 19371.284 - 19476.562: 98.8431% ( 5) 00:08:53.802 19476.562 - 19581.841: 98.8882% ( 6) 00:08:53.802 19581.841 - 19687.120: 98.9258% ( 5) 00:08:53.802 19687.120 - 19792.398: 98.9633% ( 5) 00:08:53.802 19792.398 - 19897.677: 99.0009% ( 5) 00:08:53.802 19897.677 - 20002.956: 99.0385% ( 5) 00:08:53.802 33268.074 - 33478.631: 99.0760% ( 5) 00:08:53.802 33478.631 - 33689.189: 99.1361% ( 8) 00:08:53.802 33689.189 - 33899.746: 99.1887% ( 7) 00:08:53.802 33899.746 - 34110.304: 99.2413% ( 7) 00:08:53.802 34110.304 - 34320.861: 99.3014% ( 8) 00:08:53.802 34320.861 - 34531.418: 99.3540% ( 7) 00:08:53.802 34531.418 - 34741.976: 99.4141% ( 8) 00:08:53.802 34741.976 - 34952.533: 
99.4742% ( 8) 00:08:53.802 34952.533 - 35163.091: 99.5192% ( 6) 00:08:53.802 41479.814 - 41690.371: 99.5418% ( 3) 00:08:53.802 41690.371 - 41900.929: 99.5944% ( 7) 00:08:53.802 41900.929 - 42111.486: 99.6469% ( 7) 00:08:53.802 42111.486 - 42322.043: 99.7145% ( 9) 00:08:53.802 42322.043 - 42532.601: 99.7671% ( 7) 00:08:53.802 42532.601 - 42743.158: 99.8272% ( 8) 00:08:53.802 42743.158 - 42953.716: 99.8873% ( 8) 00:08:53.802 42953.716 - 43164.273: 99.9474% ( 8) 00:08:53.802 43164.273 - 43374.831: 100.0000% ( 7) 00:08:53.802 00:08:53.802 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:53.802 ============================================================================== 00:08:53.802 Range in us Cumulative IO count 00:08:53.802 8317.018 - 8369.658: 0.0075% ( 1) 00:08:53.802 8369.658 - 8422.297: 0.1878% ( 24) 00:08:53.802 8422.297 - 8474.937: 0.6761% ( 65) 00:08:53.802 8474.937 - 8527.576: 1.3897% ( 95) 00:08:53.802 8527.576 - 8580.215: 2.4940% ( 147) 00:08:53.802 8580.215 - 8632.855: 4.4321% ( 258) 00:08:53.802 8632.855 - 8685.494: 7.0688% ( 351) 00:08:53.802 8685.494 - 8738.133: 10.3666% ( 439) 00:08:53.802 8738.133 - 8790.773: 14.0249% ( 487) 00:08:53.802 8790.773 - 8843.412: 18.5998% ( 609) 00:08:53.802 8843.412 - 8896.051: 23.1070% ( 600) 00:08:53.802 8896.051 - 8948.691: 27.8696% ( 634) 00:08:53.802 8948.691 - 9001.330: 32.7749% ( 653) 00:08:53.802 9001.330 - 9053.969: 37.5901% ( 641) 00:08:53.802 9053.969 - 9106.609: 42.7659% ( 689) 00:08:53.802 9106.609 - 9159.248: 48.1671% ( 719) 00:08:53.802 9159.248 - 9211.888: 53.4856% ( 708) 00:08:53.802 9211.888 - 9264.527: 58.6764% ( 691) 00:08:53.802 9264.527 - 9317.166: 63.8672% ( 691) 00:08:53.802 9317.166 - 9369.806: 68.7275% ( 647) 00:08:53.802 9369.806 - 9422.445: 73.3398% ( 614) 00:08:53.802 9422.445 - 9475.084: 77.3438% ( 533) 00:08:53.802 9475.084 - 9527.724: 80.5288% ( 424) 00:08:53.802 9527.724 - 9580.363: 83.1656% ( 351) 00:08:53.802 9580.363 - 9633.002: 85.2614% ( 279) 00:08:53.802 9633.002 - 9685.642: 86.9441% ( 224) 00:08:53.802 9685.642 - 9738.281: 88.3413% ( 186) 00:08:53.802 9738.281 - 9790.920: 89.4306% ( 145) 00:08:53.802 9790.920 - 9843.560: 90.2794% ( 113) 00:08:53.802 9843.560 - 9896.199: 91.0757% ( 106) 00:08:53.802 9896.199 - 9948.839: 91.7668% ( 92) 00:08:53.802 9948.839 - 10001.478: 92.4279% ( 88) 00:08:53.802 10001.478 - 10054.117: 92.9011% ( 63) 00:08:53.802 10054.117 - 10106.757: 93.2918% ( 52) 00:08:53.802 10106.757 - 10159.396: 93.6073% ( 42) 00:08:53.802 10159.396 - 10212.035: 93.9002% ( 39) 00:08:53.802 10212.035 - 10264.675: 94.1632% ( 35) 00:08:53.802 10264.675 - 10317.314: 94.3810% ( 29) 00:08:53.802 10317.314 - 10369.953: 94.5388% ( 21) 00:08:53.802 10369.953 - 10422.593: 94.6815% ( 19) 00:08:53.802 10422.593 - 10475.232: 94.8317% ( 20) 00:08:53.802 10475.232 - 10527.871: 94.9745% ( 19) 00:08:53.802 10527.871 - 10580.511: 95.1097% ( 18) 00:08:53.802 10580.511 - 10633.150: 95.1998% ( 12) 00:08:53.802 10633.150 - 10685.790: 95.2825% ( 11) 00:08:53.802 10685.790 - 10738.429: 95.3876% ( 14) 00:08:53.802 10738.429 - 10791.068: 95.4928% ( 14) 00:08:53.802 10791.068 - 10843.708: 95.5904% ( 13) 00:08:53.802 10843.708 - 10896.347: 95.6505% ( 8) 00:08:53.802 10896.347 - 10948.986: 95.6956% ( 6) 00:08:53.802 10948.986 - 11001.626: 95.7482% ( 7) 00:08:53.802 11001.626 - 11054.265: 95.8083% ( 8) 00:08:53.802 11054.265 - 11106.904: 95.8684% ( 8) 00:08:53.802 11106.904 - 11159.544: 95.9360% ( 9) 00:08:53.802 11159.544 - 11212.183: 95.9736% ( 5) 00:08:53.802 11212.183 - 11264.822: 96.0111% ( 5) 
00:08:53.802 11264.822 - 11317.462: 96.0562% ( 6) 00:08:53.802 11317.462 - 11370.101: 96.0938% ( 5) 00:08:53.802 11370.101 - 11422.741: 96.1313% ( 5) 00:08:53.802 11422.741 - 11475.380: 96.1689% ( 5) 00:08:53.802 11475.380 - 11528.019: 96.2064% ( 5) 00:08:53.802 11528.019 - 11580.659: 96.2515% ( 6) 00:08:53.802 11580.659 - 11633.298: 96.2816% ( 4) 00:08:53.802 11633.298 - 11685.937: 96.3191% ( 5) 00:08:53.802 11685.937 - 11738.577: 96.3642% ( 6) 00:08:53.802 11738.577 - 11791.216: 96.4017% ( 5) 00:08:53.802 11791.216 - 11843.855: 96.4393% ( 5) 00:08:53.802 11843.855 - 11896.495: 96.4694% ( 4) 00:08:53.802 11896.495 - 11949.134: 96.5069% ( 5) 00:08:53.802 11949.134 - 12001.773: 96.5370% ( 4) 00:08:53.802 12001.773 - 12054.413: 96.5595% ( 3) 00:08:53.802 12054.413 - 12107.052: 96.5745% ( 2) 00:08:53.802 12107.052 - 12159.692: 96.5895% ( 2) 00:08:53.802 12159.692 - 12212.331: 96.6121% ( 3) 00:08:53.802 12212.331 - 12264.970: 96.6271% ( 2) 00:08:53.802 12264.970 - 12317.610: 96.6346% ( 1) 00:08:53.802 12686.085 - 12738.724: 96.6421% ( 1) 00:08:53.802 12738.724 - 12791.364: 96.6496% ( 1) 00:08:53.802 12791.364 - 12844.003: 96.6722% ( 3) 00:08:53.802 12844.003 - 12896.643: 96.7022% ( 4) 00:08:53.802 12896.643 - 12949.282: 96.7172% ( 2) 00:08:53.802 12949.282 - 13001.921: 96.7398% ( 3) 00:08:53.802 13001.921 - 13054.561: 96.7548% ( 2) 00:08:53.802 13054.561 - 13107.200: 96.7849% ( 4) 00:08:53.802 13107.200 - 13159.839: 96.8224% ( 5) 00:08:53.802 13159.839 - 13212.479: 96.8825% ( 8) 00:08:53.802 13212.479 - 13265.118: 96.9276% ( 6) 00:08:53.802 13265.118 - 13317.757: 96.9802% ( 7) 00:08:53.802 13317.757 - 13370.397: 97.0252% ( 6) 00:08:53.802 13370.397 - 13423.036: 97.0778% ( 7) 00:08:53.802 13423.036 - 13475.676: 97.1229% ( 6) 00:08:53.802 13475.676 - 13580.954: 97.2206% ( 13) 00:08:53.802 13580.954 - 13686.233: 97.3182% ( 13) 00:08:53.802 13686.233 - 13791.512: 97.4159% ( 13) 00:08:53.802 13791.512 - 13896.790: 97.5060% ( 12) 00:08:53.802 13896.790 - 14002.069: 97.5661% ( 8) 00:08:53.802 14002.069 - 14107.348: 97.5962% ( 4) 00:08:53.802 15791.807 - 15897.086: 97.6037% ( 1) 00:08:53.802 15897.086 - 16002.365: 97.6412% ( 5) 00:08:53.802 16002.365 - 16107.643: 97.6788% ( 5) 00:08:53.802 16107.643 - 16212.922: 97.7163% ( 5) 00:08:53.802 16212.922 - 16318.201: 97.7539% ( 5) 00:08:53.802 16318.201 - 16423.480: 97.7915% ( 5) 00:08:53.802 16423.480 - 16528.758: 97.8365% ( 6) 00:08:53.802 16528.758 - 16634.037: 97.8741% ( 5) 00:08:53.802 16634.037 - 16739.316: 97.9117% ( 5) 00:08:53.802 16739.316 - 16844.594: 97.9943% ( 11) 00:08:53.802 16844.594 - 16949.873: 98.0919% ( 13) 00:08:53.802 16949.873 - 17055.152: 98.1896% ( 13) 00:08:53.802 17055.152 - 17160.431: 98.2797% ( 12) 00:08:53.802 17160.431 - 17265.709: 98.3323% ( 7) 00:08:53.803 17265.709 - 17370.988: 98.3849% ( 7) 00:08:53.803 17370.988 - 17476.267: 98.4450% ( 8) 00:08:53.803 17476.267 - 17581.545: 98.4976% ( 7) 00:08:53.803 17581.545 - 17686.824: 98.5427% ( 6) 00:08:53.803 17686.824 - 17792.103: 98.5577% ( 2) 00:08:53.803 19371.284 - 19476.562: 98.5877% ( 4) 00:08:53.803 19476.562 - 19581.841: 98.6253% ( 5) 00:08:53.803 19581.841 - 19687.120: 98.6779% ( 7) 00:08:53.803 19687.120 - 19792.398: 98.7154% ( 5) 00:08:53.803 19792.398 - 19897.677: 98.7455% ( 4) 00:08:53.803 19897.677 - 20002.956: 98.7831% ( 5) 00:08:53.803 20002.956 - 20108.235: 98.8281% ( 6) 00:08:53.803 20108.235 - 20213.513: 98.8732% ( 6) 00:08:53.803 20213.513 - 20318.792: 98.9108% ( 5) 00:08:53.803 20318.792 - 20424.071: 98.9558% ( 6) 00:08:53.803 20424.071 - 20529.349: 98.9934% ( 
5) 00:08:53.803 20529.349 - 20634.628: 99.0385% ( 6) 00:08:53.803 32215.287 - 32425.844: 99.0835% ( 6) 00:08:53.803 32425.844 - 32636.402: 99.1361% ( 7) 00:08:53.803 32636.402 - 32846.959: 99.2037% ( 9) 00:08:53.803 32846.959 - 33057.516: 99.2563% ( 7) 00:08:53.803 33057.516 - 33268.074: 99.3239% ( 9) 00:08:53.803 33268.074 - 33478.631: 99.3840% ( 8) 00:08:53.803 33478.631 - 33689.189: 99.4366% ( 7) 00:08:53.803 33689.189 - 33899.746: 99.4892% ( 7) 00:08:53.803 33899.746 - 34110.304: 99.5192% ( 4) 00:08:53.803 40427.027 - 40637.584: 99.5418% ( 3) 00:08:53.803 40637.584 - 40848.141: 99.6019% ( 8) 00:08:53.803 40848.141 - 41058.699: 99.6620% ( 8) 00:08:53.803 41058.699 - 41269.256: 99.7221% ( 8) 00:08:53.803 41269.256 - 41479.814: 99.7746% ( 7) 00:08:53.803 41479.814 - 41690.371: 99.8272% ( 7) 00:08:53.803 41690.371 - 41900.929: 99.8798% ( 7) 00:08:53.803 41900.929 - 42111.486: 99.9324% ( 7) 00:08:53.803 42111.486 - 42322.043: 100.0000% ( 9) 00:08:53.803 00:08:53.803 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:53.803 ============================================================================== 00:08:53.803 Range in us Cumulative IO count 00:08:53.803 8317.018 - 8369.658: 0.0075% ( 1) 00:08:53.803 8369.658 - 8422.297: 0.1728% ( 22) 00:08:53.803 8422.297 - 8474.937: 0.4733% ( 40) 00:08:53.803 8474.937 - 8527.576: 1.3447% ( 116) 00:08:53.803 8527.576 - 8580.215: 2.6743% ( 177) 00:08:53.803 8580.215 - 8632.855: 4.6725% ( 266) 00:08:53.803 8632.855 - 8685.494: 7.2716% ( 346) 00:08:53.803 8685.494 - 8738.133: 10.5469% ( 436) 00:08:53.803 8738.133 - 8790.773: 14.3104% ( 501) 00:08:53.803 8790.773 - 8843.412: 18.5171% ( 560) 00:08:53.803 8843.412 - 8896.051: 23.1445% ( 616) 00:08:53.803 8896.051 - 8948.691: 27.8245% ( 623) 00:08:53.803 8948.691 - 9001.330: 32.7299% ( 653) 00:08:53.803 9001.330 - 9053.969: 37.7554% ( 669) 00:08:53.803 9053.969 - 9106.609: 42.8260% ( 675) 00:08:53.803 9106.609 - 9159.248: 48.0243% ( 692) 00:08:53.803 9159.248 - 9211.888: 53.2602% ( 697) 00:08:53.803 9211.888 - 9264.527: 58.7740% ( 734) 00:08:53.803 9264.527 - 9317.166: 63.9423% ( 688) 00:08:53.803 9317.166 - 9369.806: 68.6974% ( 633) 00:08:53.803 9369.806 - 9422.445: 73.0319% ( 577) 00:08:53.803 9422.445 - 9475.084: 76.9832% ( 526) 00:08:53.803 9475.084 - 9527.724: 80.3936% ( 454) 00:08:53.803 9527.724 - 9580.363: 83.1280% ( 364) 00:08:53.803 9580.363 - 9633.002: 85.2239% ( 279) 00:08:53.803 9633.002 - 9685.642: 86.8690% ( 219) 00:08:53.803 9685.642 - 9738.281: 88.1911% ( 176) 00:08:53.803 9738.281 - 9790.920: 89.2503% ( 141) 00:08:53.803 9790.920 - 9843.560: 90.1442% ( 119) 00:08:53.803 9843.560 - 9896.199: 91.0231% ( 117) 00:08:53.803 9896.199 - 9948.839: 91.7293% ( 94) 00:08:53.803 9948.839 - 10001.478: 92.3302% ( 80) 00:08:53.803 10001.478 - 10054.117: 92.7659% ( 58) 00:08:53.803 10054.117 - 10106.757: 93.1566% ( 52) 00:08:53.803 10106.757 - 10159.396: 93.4721% ( 42) 00:08:53.803 10159.396 - 10212.035: 93.7575% ( 38) 00:08:53.803 10212.035 - 10264.675: 94.0430% ( 38) 00:08:53.803 10264.675 - 10317.314: 94.3059% ( 35) 00:08:53.803 10317.314 - 10369.953: 94.4862% ( 24) 00:08:53.803 10369.953 - 10422.593: 94.6289% ( 19) 00:08:53.803 10422.593 - 10475.232: 94.7791% ( 20) 00:08:53.803 10475.232 - 10527.871: 94.9294% ( 20) 00:08:53.803 10527.871 - 10580.511: 95.0496% ( 16) 00:08:53.803 10580.511 - 10633.150: 95.1623% ( 15) 00:08:53.803 10633.150 - 10685.790: 95.2674% ( 14) 00:08:53.803 10685.790 - 10738.429: 95.3651% ( 13) 00:08:53.803 10738.429 - 10791.068: 95.4627% ( 13) 00:08:53.803 10791.068 
- 10843.708: 95.5679% ( 14) 00:08:53.803 10843.708 - 10896.347: 95.6280% ( 8) 00:08:53.803 10896.347 - 10948.986: 95.7031% ( 10) 00:08:53.803 10948.986 - 11001.626: 95.7557% ( 7) 00:08:53.803 11001.626 - 11054.265: 95.8158% ( 8) 00:08:53.803 11054.265 - 11106.904: 95.8759% ( 8) 00:08:53.803 11106.904 - 11159.544: 95.9210% ( 6) 00:08:53.803 11159.544 - 11212.183: 95.9660% ( 6) 00:08:53.803 11212.183 - 11264.822: 96.0111% ( 6) 00:08:53.803 11264.822 - 11317.462: 96.0487% ( 5) 00:08:53.803 11317.462 - 11370.101: 96.0787% ( 4) 00:08:53.803 11370.101 - 11422.741: 96.1163% ( 5) 00:08:53.803 11422.741 - 11475.380: 96.1614% ( 6) 00:08:53.803 11475.380 - 11528.019: 96.1989% ( 5) 00:08:53.803 11528.019 - 11580.659: 96.2365% ( 5) 00:08:53.803 11580.659 - 11633.298: 96.2740% ( 5) 00:08:53.803 11633.298 - 11685.937: 96.3191% ( 6) 00:08:53.803 11685.937 - 11738.577: 96.3492% ( 4) 00:08:53.803 11738.577 - 11791.216: 96.3942% ( 6) 00:08:53.803 11791.216 - 11843.855: 96.4393% ( 6) 00:08:53.803 11843.855 - 11896.495: 96.4694% ( 4) 00:08:53.803 11896.495 - 11949.134: 96.5069% ( 5) 00:08:53.803 11949.134 - 12001.773: 96.5445% ( 5) 00:08:53.803 12001.773 - 12054.413: 96.5670% ( 3) 00:08:53.803 12054.413 - 12107.052: 96.5820% ( 2) 00:08:53.803 12107.052 - 12159.692: 96.5971% ( 2) 00:08:53.803 12159.692 - 12212.331: 96.6196% ( 3) 00:08:53.803 12212.331 - 12264.970: 96.6346% ( 2) 00:08:53.803 12264.970 - 12317.610: 96.6496% ( 2) 00:08:53.803 12317.610 - 12370.249: 96.6647% ( 2) 00:08:53.803 12370.249 - 12422.888: 96.6872% ( 3) 00:08:53.803 12422.888 - 12475.528: 96.7022% ( 2) 00:08:53.803 12475.528 - 12528.167: 96.7248% ( 3) 00:08:53.803 12528.167 - 12580.806: 96.7398% ( 2) 00:08:53.803 12580.806 - 12633.446: 96.7623% ( 3) 00:08:53.803 12633.446 - 12686.085: 96.7849% ( 3) 00:08:53.803 12686.085 - 12738.724: 96.7999% ( 2) 00:08:53.803 12738.724 - 12791.364: 96.8224% ( 3) 00:08:53.803 12791.364 - 12844.003: 96.8374% ( 2) 00:08:53.803 12844.003 - 12896.643: 96.8600% ( 3) 00:08:53.803 12896.643 - 12949.282: 96.8750% ( 2) 00:08:53.803 12949.282 - 13001.921: 96.8975% ( 3) 00:08:53.803 13001.921 - 13054.561: 96.9126% ( 2) 00:08:53.803 13054.561 - 13107.200: 96.9576% ( 6) 00:08:53.803 13107.200 - 13159.839: 97.0027% ( 6) 00:08:53.803 13159.839 - 13212.479: 97.0478% ( 6) 00:08:53.803 13212.479 - 13265.118: 97.0928% ( 6) 00:08:53.803 13265.118 - 13317.757: 97.1379% ( 6) 00:08:53.803 13317.757 - 13370.397: 97.1905% ( 7) 00:08:53.804 13370.397 - 13423.036: 97.2206% ( 4) 00:08:53.804 13423.036 - 13475.676: 97.2356% ( 2) 00:08:53.804 13475.676 - 13580.954: 97.2656% ( 4) 00:08:53.804 13580.954 - 13686.233: 97.2806% ( 2) 00:08:53.804 14212.627 - 14317.905: 97.3032% ( 3) 00:08:53.804 14317.905 - 14423.184: 97.3483% ( 6) 00:08:53.804 14423.184 - 14528.463: 97.3933% ( 6) 00:08:53.804 14528.463 - 14633.741: 97.4384% ( 6) 00:08:53.804 14633.741 - 14739.020: 97.4835% ( 6) 00:08:53.804 14739.020 - 14844.299: 97.5285% ( 6) 00:08:53.804 14844.299 - 14949.578: 97.5736% ( 6) 00:08:53.804 14949.578 - 15054.856: 97.5962% ( 3) 00:08:53.804 15265.414 - 15370.692: 97.6112% ( 2) 00:08:53.804 15370.692 - 15475.971: 97.6487% ( 5) 00:08:53.804 15475.971 - 15581.250: 97.6863% ( 5) 00:08:53.804 15581.250 - 15686.529: 97.7314% ( 6) 00:08:53.804 15686.529 - 15791.807: 97.7689% ( 5) 00:08:53.804 15791.807 - 15897.086: 97.8140% ( 6) 00:08:53.804 15897.086 - 16002.365: 97.8516% ( 5) 00:08:53.804 16002.365 - 16107.643: 97.8966% ( 6) 00:08:53.804 16107.643 - 16212.922: 97.9342% ( 5) 00:08:53.804 16212.922 - 16318.201: 97.9718% ( 5) 00:08:53.804 16318.201 - 
16423.480: 98.0168% ( 6) 00:08:53.804 16423.480 - 16528.758: 98.0619% ( 6) 00:08:53.804 16528.758 - 16634.037: 98.0769% ( 2) 00:08:53.804 17370.988 - 17476.267: 98.1145% ( 5) 00:08:53.804 17476.267 - 17581.545: 98.1671% ( 7) 00:08:53.804 17581.545 - 17686.824: 98.2272% ( 8) 00:08:53.804 17686.824 - 17792.103: 98.2797% ( 7) 00:08:53.804 17792.103 - 17897.382: 98.3323% ( 7) 00:08:53.804 17897.382 - 18002.660: 98.3774% ( 6) 00:08:53.804 18002.660 - 18107.939: 98.4375% ( 8) 00:08:53.804 18107.939 - 18213.218: 98.4901% ( 7) 00:08:53.804 18213.218 - 18318.496: 98.5427% ( 7) 00:08:53.804 18318.496 - 18423.775: 98.5577% ( 2) 00:08:53.804 18529.054 - 18634.333: 98.5802% ( 3) 00:08:53.804 18634.333 - 18739.611: 98.6103% ( 4) 00:08:53.804 18739.611 - 18844.890: 98.6478% ( 5) 00:08:53.804 18844.890 - 18950.169: 98.6779% ( 4) 00:08:53.804 18950.169 - 19055.447: 98.7154% ( 5) 00:08:53.804 19055.447 - 19160.726: 98.7455% ( 4) 00:08:53.804 19160.726 - 19266.005: 98.7755% ( 4) 00:08:53.804 19266.005 - 19371.284: 98.8206% ( 6) 00:08:53.804 19371.284 - 19476.562: 98.8507% ( 4) 00:08:53.804 19476.562 - 19581.841: 98.8732% ( 3) 00:08:53.804 20424.071 - 20529.349: 98.9108% ( 5) 00:08:53.804 20529.349 - 20634.628: 98.9483% ( 5) 00:08:53.804 20634.628 - 20739.907: 98.9934% ( 6) 00:08:53.804 20739.907 - 20845.186: 99.0309% ( 5) 00:08:53.804 20845.186 - 20950.464: 99.0385% ( 1) 00:08:53.804 30530.827 - 30741.385: 99.0910% ( 7) 00:08:53.804 30741.385 - 30951.942: 99.1436% ( 7) 00:08:53.804 30951.942 - 31162.500: 99.2112% ( 9) 00:08:53.804 31162.500 - 31373.057: 99.2638% ( 7) 00:08:53.804 31373.057 - 31583.614: 99.3239% ( 8) 00:08:53.804 31583.614 - 31794.172: 99.3840% ( 8) 00:08:53.804 31794.172 - 32004.729: 99.4441% ( 8) 00:08:53.804 32004.729 - 32215.287: 99.4817% ( 5) 00:08:53.804 32215.287 - 32425.844: 99.5192% ( 5) 00:08:53.804 38742.567 - 38953.124: 99.5643% ( 6) 00:08:53.804 38953.124 - 39163.682: 99.6244% ( 8) 00:08:53.804 39163.682 - 39374.239: 99.6845% ( 8) 00:08:53.804 39374.239 - 39584.797: 99.7371% ( 7) 00:08:53.804 39584.797 - 39795.354: 99.7897% ( 7) 00:08:53.804 39795.354 - 40005.912: 99.8422% ( 7) 00:08:53.804 40005.912 - 40216.469: 99.9099% ( 9) 00:08:53.804 40216.469 - 40427.027: 99.9624% ( 7) 00:08:53.804 40427.027 - 40637.584: 100.0000% ( 5) 00:08:53.804 00:08:53.804 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:53.804 ============================================================================== 00:08:53.804 Range in us Cumulative IO count 00:08:53.804 8317.018 - 8369.658: 0.0376% ( 5) 00:08:53.804 8369.658 - 8422.297: 0.1953% ( 21) 00:08:53.804 8422.297 - 8474.937: 0.5559% ( 48) 00:08:53.804 8474.937 - 8527.576: 1.3597% ( 107) 00:08:53.804 8527.576 - 8580.215: 2.8245% ( 195) 00:08:53.804 8580.215 - 8632.855: 5.0030% ( 290) 00:08:53.804 8632.855 - 8685.494: 7.4745% ( 329) 00:08:53.804 8685.494 - 8738.133: 10.7497% ( 436) 00:08:53.804 8738.133 - 8790.773: 14.5583% ( 507) 00:08:53.804 8790.773 - 8843.412: 18.7876% ( 563) 00:08:53.804 8843.412 - 8896.051: 23.1746% ( 584) 00:08:53.804 8896.051 - 8948.691: 27.9748% ( 639) 00:08:53.804 8948.691 - 9001.330: 32.7599% ( 637) 00:08:53.804 9001.330 - 9053.969: 37.8606% ( 679) 00:08:53.804 9053.969 - 9106.609: 43.0814% ( 695) 00:08:53.804 9106.609 - 9159.248: 48.2647% ( 690) 00:08:53.804 9159.248 - 9211.888: 53.5983% ( 710) 00:08:53.804 9211.888 - 9264.527: 58.7740% ( 689) 00:08:53.804 9264.527 - 9317.166: 63.9423% ( 688) 00:08:53.804 9317.166 - 9369.806: 68.7575% ( 641) 00:08:53.804 9369.806 - 9422.445: 72.9642% ( 560) 00:08:53.804 
9422.445 - 9475.084: 76.7428% ( 503) 00:08:53.804 9475.084 - 9527.724: 79.9730% ( 430) 00:08:53.804 9527.724 - 9580.363: 82.6247% ( 353) 00:08:53.804 9580.363 - 9633.002: 84.9459% ( 309) 00:08:53.804 9633.002 - 9685.642: 86.6136% ( 222) 00:08:53.804 9685.642 - 9738.281: 87.9132% ( 173) 00:08:53.804 9738.281 - 9790.920: 89.0850% ( 156) 00:08:53.804 9790.920 - 9843.560: 90.0466% ( 128) 00:08:53.804 9843.560 - 9896.199: 90.8504% ( 107) 00:08:53.804 9896.199 - 9948.839: 91.5490% ( 93) 00:08:53.804 9948.839 - 10001.478: 92.0898% ( 72) 00:08:53.804 10001.478 - 10054.117: 92.5856% ( 66) 00:08:53.804 10054.117 - 10106.757: 92.9988% ( 55) 00:08:53.804 10106.757 - 10159.396: 93.3669% ( 49) 00:08:53.804 10159.396 - 10212.035: 93.6223% ( 34) 00:08:53.804 10212.035 - 10264.675: 93.9002% ( 37) 00:08:53.804 10264.675 - 10317.314: 94.1707% ( 36) 00:08:53.804 10317.314 - 10369.953: 94.4035% ( 31) 00:08:53.804 10369.953 - 10422.593: 94.6064% ( 27) 00:08:53.804 10422.593 - 10475.232: 94.7641% ( 21) 00:08:53.804 10475.232 - 10527.871: 94.9069% ( 19) 00:08:53.804 10527.871 - 10580.511: 95.0496% ( 19) 00:08:53.804 10580.511 - 10633.150: 95.1698% ( 16) 00:08:53.804 10633.150 - 10685.790: 95.2674% ( 13) 00:08:53.804 10685.790 - 10738.429: 95.3501% ( 11) 00:08:53.804 10738.429 - 10791.068: 95.4402% ( 12) 00:08:53.804 10791.068 - 10843.708: 95.5379% ( 13) 00:08:53.804 10843.708 - 10896.347: 95.5904% ( 7) 00:08:53.804 10896.347 - 10948.986: 95.6656% ( 10) 00:08:53.804 10948.986 - 11001.626: 95.7181% ( 7) 00:08:53.804 11001.626 - 11054.265: 95.7858% ( 9) 00:08:53.804 11054.265 - 11106.904: 95.8459% ( 8) 00:08:53.804 11106.904 - 11159.544: 95.8834% ( 5) 00:08:53.804 11159.544 - 11212.183: 95.9435% ( 8) 00:08:53.804 11212.183 - 11264.822: 95.9961% ( 7) 00:08:53.804 11264.822 - 11317.462: 96.0337% ( 5) 00:08:53.804 11317.462 - 11370.101: 96.0637% ( 4) 00:08:53.804 11370.101 - 11422.741: 96.1088% ( 6) 00:08:53.804 11422.741 - 11475.380: 96.1463% ( 5) 00:08:53.804 11475.380 - 11528.019: 96.1764% ( 4) 00:08:53.804 11528.019 - 11580.659: 96.2215% ( 6) 00:08:53.805 11580.659 - 11633.298: 96.2515% ( 4) 00:08:53.805 11633.298 - 11685.937: 96.2891% ( 5) 00:08:53.805 11685.937 - 11738.577: 96.3266% ( 5) 00:08:53.805 11738.577 - 11791.216: 96.3642% ( 5) 00:08:53.805 11791.216 - 11843.855: 96.3942% ( 4) 00:08:53.805 11843.855 - 11896.495: 96.4318% ( 5) 00:08:53.805 11896.495 - 11949.134: 96.4694% ( 5) 00:08:53.805 11949.134 - 12001.773: 96.5219% ( 7) 00:08:53.805 12001.773 - 12054.413: 96.5670% ( 6) 00:08:53.805 12054.413 - 12107.052: 96.5971% ( 4) 00:08:53.805 12107.052 - 12159.692: 96.6346% ( 5) 00:08:53.805 12159.692 - 12212.331: 96.6647% ( 4) 00:08:53.805 12212.331 - 12264.970: 96.7022% ( 5) 00:08:53.805 12264.970 - 12317.610: 96.7473% ( 6) 00:08:53.805 12317.610 - 12370.249: 96.7773% ( 4) 00:08:53.805 12370.249 - 12422.888: 96.8224% ( 6) 00:08:53.805 12422.888 - 12475.528: 96.8675% ( 6) 00:08:53.805 12475.528 - 12528.167: 96.9050% ( 5) 00:08:53.805 12528.167 - 12580.806: 96.9351% ( 4) 00:08:53.805 12580.806 - 12633.446: 96.9576% ( 3) 00:08:53.805 12633.446 - 12686.085: 96.9727% ( 2) 00:08:53.805 12686.085 - 12738.724: 96.9952% ( 3) 00:08:53.805 12738.724 - 12791.364: 97.0102% ( 2) 00:08:53.805 12791.364 - 12844.003: 97.0328% ( 3) 00:08:53.805 12844.003 - 12896.643: 97.0553% ( 3) 00:08:53.805 12896.643 - 12949.282: 97.0778% ( 3) 00:08:53.805 12949.282 - 13001.921: 97.1004% ( 3) 00:08:53.805 13001.921 - 13054.561: 97.1154% ( 2) 00:08:53.805 13580.954 - 13686.233: 97.1379% ( 3) 00:08:53.805 13686.233 - 13791.512: 97.1905% ( 7) 
00:08:53.805 13791.512 - 13896.790: 97.2431% ( 7) 00:08:53.805 13896.790 - 14002.069: 97.2731% ( 4) 00:08:53.805 14002.069 - 14107.348: 97.3182% ( 6) 00:08:53.805 14107.348 - 14212.627: 97.3558% ( 5) 00:08:53.805 14212.627 - 14317.905: 97.4008% ( 6) 00:08:53.805 14317.905 - 14423.184: 97.4384% ( 5) 00:08:53.805 14423.184 - 14528.463: 97.4835% ( 6) 00:08:53.805 14528.463 - 14633.741: 97.5285% ( 6) 00:08:53.805 14633.741 - 14739.020: 97.5886% ( 8) 00:08:53.805 14739.020 - 14844.299: 97.6412% ( 7) 00:08:53.805 14844.299 - 14949.578: 97.6713% ( 4) 00:08:53.805 14949.578 - 15054.856: 97.7013% ( 4) 00:08:53.805 15054.856 - 15160.135: 97.7314% ( 4) 00:08:53.805 15160.135 - 15265.414: 97.7614% ( 4) 00:08:53.805 15265.414 - 15370.692: 97.7915% ( 4) 00:08:53.805 15370.692 - 15475.971: 97.8215% ( 4) 00:08:53.805 15475.971 - 15581.250: 97.8591% ( 5) 00:08:53.805 15581.250 - 15686.529: 97.8966% ( 5) 00:08:53.805 15686.529 - 15791.807: 97.9342% ( 5) 00:08:53.805 15791.807 - 15897.086: 97.9718% ( 5) 00:08:53.805 15897.086 - 16002.365: 98.0093% ( 5) 00:08:53.805 16002.365 - 16107.643: 98.0469% ( 5) 00:08:53.805 16107.643 - 16212.922: 98.0769% ( 4) 00:08:53.805 17686.824 - 17792.103: 98.0919% ( 2) 00:08:53.805 17792.103 - 17897.382: 98.1445% ( 7) 00:08:53.805 17897.382 - 18002.660: 98.1896% ( 6) 00:08:53.805 18002.660 - 18107.939: 98.2347% ( 6) 00:08:53.805 18107.939 - 18213.218: 98.2873% ( 7) 00:08:53.805 18213.218 - 18318.496: 98.3248% ( 5) 00:08:53.805 18318.496 - 18423.775: 98.3624% ( 5) 00:08:53.805 18423.775 - 18529.054: 98.3999% ( 5) 00:08:53.805 18529.054 - 18634.333: 98.4375% ( 5) 00:08:53.805 18634.333 - 18739.611: 98.4751% ( 5) 00:08:53.805 18739.611 - 18844.890: 98.5201% ( 6) 00:08:53.805 18844.890 - 18950.169: 98.5577% ( 5) 00:08:53.805 18950.169 - 19055.447: 98.5802% ( 3) 00:08:53.805 19055.447 - 19160.726: 98.6178% ( 5) 00:08:53.805 19160.726 - 19266.005: 98.6478% ( 4) 00:08:53.805 19266.005 - 19371.284: 98.6854% ( 5) 00:08:53.805 19371.284 - 19476.562: 98.7154% ( 4) 00:08:53.805 19476.562 - 19581.841: 98.7530% ( 5) 00:08:53.805 19581.841 - 19687.120: 98.7831% ( 4) 00:08:53.805 19687.120 - 19792.398: 98.8206% ( 5) 00:08:53.805 19792.398 - 19897.677: 98.8507% ( 4) 00:08:53.805 19897.677 - 20002.956: 98.8807% ( 4) 00:08:53.805 20002.956 - 20108.235: 98.9183% ( 5) 00:08:53.805 20108.235 - 20213.513: 98.9483% ( 4) 00:08:53.805 20213.513 - 20318.792: 98.9859% ( 5) 00:08:53.805 20318.792 - 20424.071: 99.0234% ( 5) 00:08:53.805 20424.071 - 20529.349: 99.0385% ( 2) 00:08:53.805 28846.368 - 29056.925: 99.0760% ( 5) 00:08:53.805 29056.925 - 29267.483: 99.1361% ( 8) 00:08:53.805 29267.483 - 29478.040: 99.1887% ( 7) 00:08:53.805 29478.040 - 29688.598: 99.2488% ( 8) 00:08:53.805 29688.598 - 29899.155: 99.3089% ( 8) 00:08:53.805 29899.155 - 30109.712: 99.3615% ( 7) 00:08:53.805 30109.712 - 30320.270: 99.4141% ( 7) 00:08:53.805 30320.270 - 30530.827: 99.4742% ( 8) 00:08:53.805 30530.827 - 30741.385: 99.5192% ( 6) 00:08:53.805 37058.108 - 37268.665: 99.5718% ( 7) 00:08:53.805 37268.665 - 37479.222: 99.6319% ( 8) 00:08:53.805 37479.222 - 37689.780: 99.6920% ( 8) 00:08:53.805 37689.780 - 37900.337: 99.7446% ( 7) 00:08:53.805 37900.337 - 38110.895: 99.8047% ( 8) 00:08:53.805 38110.895 - 38321.452: 99.8648% ( 8) 00:08:53.805 38321.452 - 38532.010: 99.9249% ( 8) 00:08:53.805 38532.010 - 38742.567: 99.9775% ( 7) 00:08:53.805 38742.567 - 38953.124: 100.0000% ( 3) 00:08:53.805 00:08:53.805 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:53.805 
==============================================================================
00:08:53.805        Range in us     Cumulative IO count
00:08:53.805 [buckets 8317.018us - 30530.827us; cumulative count rises from 0.0150% ( 2) to 100.0000% ( 6) at 30530.827us]
00:08:53.807 
00:08:53.807 15:13:36 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:55.195 Initializing NVMe Controllers
00:08:55.195 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:55.195 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:55.195 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:55.195 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:55.195 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:55.195 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:55.195 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:55.195 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:55.195 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:55.195 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:55.195 Initialization complete. Launching workers.
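(A note on the invocation above, since the flags drive everything that follows: the sketch below restates the same command with each flag glossed. The glosses are assumptions drawn from typical spdk_nvme_perf usage text, not from anything this log states, so verify them against --help on the build in question.)

  # Minimal sketch: the same workload as the run above, flag meanings assumed.
  # -q 128   : keep 128 I/Os outstanding per namespace (queue depth)
  # -w write : 100% sequential-write workload
  # -o 12288 : 12288-byte (12 KiB) I/O size
  # -t 1     : run for 1 second
  # -LL      : latency tracking; passed twice, which here corresponds to the
  #            detailed per-device latency histograms printed further below
  # -i 0     : shared-memory group ID 0 (identifies the SPDK multi-process group)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0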
00:08:55.195 ========================================================
00:08:55.195                                                                             Latency(us)
00:08:55.195 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:55.195 PCIE (0000:00:10.0) NSID 1 from core 0:   12303.10     144.18   10427.89    6945.93   32821.49
00:08:55.195 PCIE (0000:00:11.0) NSID 1 from core 0:   12303.10     144.18   10411.60    7173.68   31306.54
00:08:55.195 PCIE (0000:00:13.0) NSID 1 from core 0:   12303.10     144.18   10395.43    7107.46   29363.33
00:08:55.195 PCIE (0000:00:12.0) NSID 1 from core 0:   12303.10     144.18   10379.11    7060.74   27962.38
00:08:55.195 PCIE (0000:00:12.0) NSID 2 from core 0:   12303.10     144.18   10362.90    7198.16   26217.45
00:08:55.195 PCIE (0000:00:12.0) NSID 3 from core 0:   12303.10     144.18   10347.23    7128.13   25516.07
00:08:55.195 ========================================================
00:08:55.195 Total                                  :   73818.57     865.06   10387.36    6945.93   32821.49
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7422.149us
00:08:55.195  10.00000% :  7843.264us
00:08:55.195  25.00000% :  8211.740us
00:08:55.195  50.00000% :  8896.051us
00:08:55.195  75.00000% : 13159.839us
00:08:55.195  90.00000% : 13896.790us
00:08:55.195  95.00000% : 14317.905us
00:08:55.195  98.00000% : 18002.660us
00:08:55.195  99.00000% : 22319.088us
00:08:55.195  99.50000% : 30741.385us
00:08:55.195  99.90000% : 32846.959us
00:08:55.195  99.99000% : 32846.959us
00:08:55.195  99.99900% : 32846.959us
00:08:55.195  99.99990% : 32846.959us
00:08:55.195  99.99999% : 32846.959us
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7474.789us
00:08:55.195  10.00000% :  7843.264us
00:08:55.195  25.00000% :  8159.100us
00:08:55.195  50.00000% :  8896.051us
00:08:55.195  75.00000% : 13212.479us
00:08:55.195  90.00000% : 13791.512us
00:08:55.195  95.00000% : 14107.348us
00:08:55.195  98.00000% : 18107.939us
00:08:55.195  99.00000% : 22634.924us
00:08:55.195  99.50000% : 29688.598us
00:08:55.195  99.90000% : 31162.500us
00:08:55.195  99.99000% : 31373.057us
00:08:55.195  99.99900% : 31373.057us
00:08:55.195  99.99990% : 31373.057us
00:08:55.195  99.99999% : 31373.057us
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7474.789us
00:08:55.195  10.00000% :  7790.625us
00:08:55.195  25.00000% :  8211.740us
00:08:55.195  50.00000% :  8843.412us
00:08:55.195  75.00000% : 13212.479us
00:08:55.195  90.00000% : 13686.233us
00:08:55.195  95.00000% : 14107.348us
00:08:55.195  98.00000% : 17581.545us
00:08:55.195  99.00000% : 22213.809us
00:08:55.195  99.50000% : 28004.138us
00:08:55.195  99.90000% : 29056.925us
00:08:55.195  99.99000% : 29478.040us
00:08:55.195  99.99900% : 29478.040us
00:08:55.195  99.99990% : 29478.040us
00:08:55.195  99.99999% : 29478.040us
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7474.789us
00:08:55.195  10.00000% :  7843.264us
00:08:55.195  25.00000% :  8211.740us
00:08:55.195  50.00000% :  8843.412us
00:08:55.195  75.00000% : 13212.479us
00:08:55.195  90.00000% : 13686.233us
00:08:55.195  95.00000% : 14317.905us
00:08:55.195  98.00000% : 17581.545us
00:08:55.195  99.00000% : 20529.349us
00:08:55.195  99.50000% : 26635.515us
00:08:55.195  99.90000% : 27583.023us
00:08:55.195  99.99000% : 28004.138us
00:08:55.195  99.99900% : 28004.138us
00:08:55.195  99.99990% : 28004.138us
00:08:55.195  99.99999% : 28004.138us
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7527.428us
00:08:55.195  10.00000% :  7843.264us
00:08:55.195  25.00000% :  8211.740us
00:08:55.195  50.00000% :  8843.412us
00:08:55.195  75.00000% : 13212.479us
00:08:55.195  90.00000% : 13791.512us
00:08:55.195  95.00000% : 14107.348us
00:08:55.195  98.00000% : 17686.824us
00:08:55.195  99.00000% : 19476.562us
00:08:55.195  99.50000% : 24424.662us
00:08:55.195  99.90000% : 25898.564us
00:08:55.195  99.99000% : 26214.400us
00:08:55.195  99.99900% : 26319.679us
00:08:55.195  99.99990% : 26319.679us
00:08:55.195  99.99999% : 26319.679us
00:08:55.195 
00:08:55.195 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:55.195 =================================================================================
00:08:55.195   1.00000% :  7474.789us
00:08:55.195  10.00000% :  7843.264us
00:08:55.195  25.00000% :  8211.740us
00:08:55.195  50.00000% :  8896.051us
00:08:55.195  75.00000% : 13212.479us
00:08:55.195  90.00000% : 13686.233us
00:08:55.195  95.00000% : 14002.069us
00:08:55.195  98.00000% : 17581.545us
00:08:55.195  99.00000% : 18423.775us
00:08:55.195  99.50000% : 23477.153us
00:08:55.195  99.90000% : 25161.613us
00:08:55.195  99.99000% : 25477.449us
00:08:55.195  99.99900% : 25582.728us
00:08:55.195  99.99990% : 25582.728us
00:08:55.195  99.99999% : 25582.728us
00:08:55.195 
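(Two quick cross-checks on the summary table above. Throughput follows from IOPS and the 12288-byte I/O size: 12303.10 IO/s x 12288 B = 151180492.8 B/s, and 151180492.8 / 1048576 = 144.18 MiB/s, matching the MiB/s column for every namespace. Likewise, with 128 I/Os kept outstanding per namespace, Little's law predicts an average latency of roughly 128 / 12303.10 IO/s = 10404us, consistent with the 10347.23us - 10427.89us averages reported.)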
00:08:55.195 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:55.195 ==============================================================================
00:08:55.195        Range in us     Cumulative IO count
00:08:55.196 [buckets 6895.756us - 32846.959us; cumulative count rises from 0.0081% ( 1) to 100.0000% ( 27) at 32846.959us]
00:08:55.196 
00:08:55.196 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:55.196 ==============================================================================
00:08:55.196        Range in us     Cumulative IO count
00:08:55.197 [buckets 7158.953us - 31373.057us; cumulative count rises from 0.0081% ( 1) to 100.0000% ( 5) at 31373.057us]
00:08:55.197 
00:08:55.197 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:55.197 ==============================================================================
00:08:55.197        Range in us     Cumulative IO count
00:08:55.198 [buckets 7106.313us - 29478.040us; cumulative count rises from 0.0162% ( 2) to 100.0000% ( 3) at 29478.040us]
00:08:55.198 
00:08:55.198 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:55.198 ==============================================================================
00:08:55.198        Range in us     Cumulative IO count
00:08:55.199 [buckets 7053.674us - 28004.138us; cumulative count rises from 0.0162% ( 2) to 100.0000% ( 5) at 28004.138us]
00:08:55.199 
00:08:55.200 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:55.200 ==============================================================================
00:08:55.200        Range in us     Cumulative IO count
00:08:55.200 [buckets 7158.953us - 26319.679us; cumulative count rises from 0.0162% ( 2) to 100.0000% ( 1) at 26319.679us]
00:08:55.201 
00:08:55.201 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:55.201 ==============================================================================
00:08:55.201        Range in us     Cumulative IO count
00:08:55.201 [buckets from 7106.313us, 0.0081% ( 1) cumulative, through 8527.576us, 42.4466% ( 211) cumulative; the capture breaks off mid-histogram here]
8580.215: 43.6852% ( 153) 00:08:55.201 8580.215 - 8632.855: 45.0696% ( 171) 00:08:55.201 8632.855 - 8685.494: 46.3164% ( 154) 00:08:55.201 8685.494 - 8738.133: 47.5227% ( 149) 00:08:55.201 8738.133 - 8790.773: 48.5751% ( 130) 00:08:55.201 8790.773 - 8843.412: 49.9838% ( 174) 00:08:55.201 8843.412 - 8896.051: 51.2144% ( 152) 00:08:55.201 8896.051 - 8948.691: 52.3721% ( 143) 00:08:55.201 8948.691 - 9001.330: 53.8617% ( 184) 00:08:55.201 9001.330 - 9053.969: 55.0113% ( 142) 00:08:55.201 9053.969 - 9106.609: 55.9909% ( 121) 00:08:55.201 9106.609 - 9159.248: 56.7600% ( 95) 00:08:55.201 9159.248 - 9211.888: 57.3429% ( 72) 00:08:55.201 9211.888 - 9264.527: 57.9663% ( 77) 00:08:55.201 9264.527 - 9317.166: 58.5492% ( 72) 00:08:55.201 9317.166 - 9369.806: 58.9054% ( 44) 00:08:55.201 9369.806 - 9422.445: 59.3021% ( 49) 00:08:55.201 9422.445 - 9475.084: 59.8365% ( 66) 00:08:55.201 9475.084 - 9527.724: 60.2413% ( 50) 00:08:55.201 9527.724 - 9580.363: 60.6865% ( 55) 00:08:55.201 9580.363 - 9633.002: 60.9699% ( 35) 00:08:55.201 9633.002 - 9685.642: 61.3099% ( 42) 00:08:55.201 9685.642 - 9738.281: 61.6418% ( 41) 00:08:55.201 9738.281 - 9790.920: 61.9171% ( 34) 00:08:55.201 9790.920 - 9843.560: 62.0304% ( 14) 00:08:55.201 9843.560 - 9896.199: 62.1843% ( 19) 00:08:55.201 9896.199 - 9948.839: 62.3300% ( 18) 00:08:55.201 9948.839 - 10001.478: 62.5162% ( 23) 00:08:55.201 10001.478 - 10054.117: 62.6052% ( 11) 00:08:55.201 10054.117 - 10106.757: 62.6862% ( 10) 00:08:55.201 10106.757 - 10159.396: 62.7753% ( 11) 00:08:55.201 10159.396 - 10212.035: 62.8157% ( 5) 00:08:55.201 10212.035 - 10264.675: 62.8481% ( 4) 00:08:55.201 10264.675 - 10317.314: 62.8643% ( 2) 00:08:55.201 10317.314 - 10369.953: 62.8886% ( 3) 00:08:55.201 10369.953 - 10422.593: 62.8967% ( 1) 00:08:55.201 10422.593 - 10475.232: 62.9372% ( 5) 00:08:55.201 10475.232 - 10527.871: 63.0505% ( 14) 00:08:55.201 10527.871 - 10580.511: 63.1396% ( 11) 00:08:55.201 10580.511 - 10633.150: 63.1962% ( 7) 00:08:55.201 10633.150 - 10685.790: 63.2853% ( 11) 00:08:55.201 10685.790 - 10738.429: 63.5120% ( 28) 00:08:55.201 10738.429 - 10791.068: 63.7225% ( 26) 00:08:55.201 10791.068 - 10843.708: 63.8601% ( 17) 00:08:55.201 10843.708 - 10896.347: 63.9896% ( 16) 00:08:55.201 10896.347 - 10948.986: 64.0625% ( 9) 00:08:55.201 10948.986 - 11001.626: 64.1435% ( 10) 00:08:55.201 11001.626 - 11054.265: 64.1920% ( 6) 00:08:55.201 11054.265 - 11106.904: 64.2163% ( 3) 00:08:55.201 11106.904 - 11159.544: 64.2406% ( 3) 00:08:55.201 11159.544 - 11212.183: 64.2973% ( 7) 00:08:55.201 11212.183 - 11264.822: 64.3701% ( 9) 00:08:55.201 11264.822 - 11317.462: 64.4511% ( 10) 00:08:55.201 11317.462 - 11370.101: 64.6130% ( 20) 00:08:55.201 11370.101 - 11422.741: 64.8235% ( 26) 00:08:55.201 11422.741 - 11475.380: 65.0097% ( 23) 00:08:55.201 11475.380 - 11528.019: 65.0907% ( 10) 00:08:55.201 11528.019 - 11580.659: 65.1716% ( 10) 00:08:55.201 11580.659 - 11633.298: 65.2121% ( 5) 00:08:55.201 11633.298 - 11685.937: 65.2202% ( 1) 00:08:55.201 11685.937 - 11738.577: 65.2364% ( 2) 00:08:55.201 11738.577 - 11791.216: 65.2526% ( 2) 00:08:55.201 11791.216 - 11843.855: 65.2688% ( 2) 00:08:55.201 11843.855 - 11896.495: 65.2769% ( 1) 00:08:55.201 11896.495 - 11949.134: 65.3012% ( 3) 00:08:55.201 11949.134 - 12001.773: 65.3174% ( 2) 00:08:55.201 12001.773 - 12054.413: 65.3416% ( 3) 00:08:55.201 12054.413 - 12107.052: 65.3578% ( 2) 00:08:55.201 12107.052 - 12159.692: 65.3821% ( 3) 00:08:55.201 12159.692 - 12212.331: 65.3983% ( 2) 00:08:55.201 12212.331 - 12264.970: 65.4145% ( 2) 00:08:55.201 12264.970 - 
12317.610: 65.4955% ( 10) 00:08:55.201 12317.610 - 12370.249: 65.6007% ( 13) 00:08:55.201 12370.249 - 12422.888: 65.7060% ( 13) 00:08:55.201 12422.888 - 12475.528: 65.7950% ( 11) 00:08:55.201 12475.528 - 12528.167: 65.9165% ( 15) 00:08:55.201 12528.167 - 12580.806: 66.0703% ( 19) 00:08:55.201 12580.806 - 12633.446: 66.3212% ( 31) 00:08:55.201 12633.446 - 12686.085: 66.5398% ( 27) 00:08:55.201 12686.085 - 12738.724: 66.8718% ( 41) 00:08:55.201 12738.724 - 12791.364: 67.4304% ( 69) 00:08:55.201 12791.364 - 12844.003: 68.2642% ( 103) 00:08:55.201 12844.003 - 12896.643: 69.1629% ( 111) 00:08:55.201 12896.643 - 12949.282: 70.0777% ( 113) 00:08:55.201 12949.282 - 13001.921: 71.0816% ( 124) 00:08:55.201 13001.921 - 13054.561: 72.0774% ( 123) 00:08:55.201 13054.561 - 13107.200: 72.8303% ( 93) 00:08:55.201 13107.200 - 13159.839: 73.8909% ( 131) 00:08:55.201 13159.839 - 13212.479: 75.0243% ( 140) 00:08:55.201 13212.479 - 13265.118: 76.2468% ( 151) 00:08:55.201 13265.118 - 13317.757: 78.0117% ( 218) 00:08:55.201 13317.757 - 13370.397: 80.0113% ( 247) 00:08:55.201 13370.397 - 13423.036: 81.8329% ( 225) 00:08:55.201 13423.036 - 13475.676: 83.7678% ( 239) 00:08:55.201 13475.676 - 13580.954: 87.3948% ( 448) 00:08:55.201 13580.954 - 13686.233: 90.5036% ( 384) 00:08:55.201 13686.233 - 13791.512: 92.3899% ( 233) 00:08:55.201 13791.512 - 13896.790: 94.0900% ( 210) 00:08:55.201 13896.790 - 14002.069: 95.0777% ( 122) 00:08:55.201 14002.069 - 14107.348: 95.6201% ( 67) 00:08:55.201 14107.348 - 14212.627: 95.9764% ( 44) 00:08:55.201 14212.627 - 14317.905: 96.1788% ( 25) 00:08:55.201 14317.905 - 14423.184: 96.2516% ( 9) 00:08:55.201 14423.184 - 14528.463: 96.3002% ( 6) 00:08:55.201 14528.463 - 14633.741: 96.3488% ( 6) 00:08:55.201 14633.741 - 14739.020: 96.3731% ( 3) 00:08:55.201 14844.299 - 14949.578: 96.3892% ( 2) 00:08:55.201 14949.578 - 15054.856: 96.4864% ( 12) 00:08:55.201 15054.856 - 15160.135: 96.5593% ( 9) 00:08:55.201 15160.135 - 15265.414: 96.7131% ( 19) 00:08:55.201 15265.414 - 15370.692: 96.7617% ( 6) 00:08:55.201 15370.692 - 15475.971: 96.7940% ( 4) 00:08:55.201 15475.971 - 15581.250: 96.8264% ( 4) 00:08:55.201 15581.250 - 15686.529: 96.8588% ( 4) 00:08:55.201 15686.529 - 15791.807: 96.8912% ( 4) 00:08:55.201 15897.086 - 16002.365: 96.8993% ( 1) 00:08:55.201 16107.643 - 16212.922: 96.9074% ( 1) 00:08:55.201 16318.201 - 16423.480: 96.9883% ( 10) 00:08:55.201 16423.480 - 16528.758: 97.0774% ( 11) 00:08:55.201 16528.758 - 16634.037: 97.1584% ( 10) 00:08:55.201 16634.037 - 16739.316: 97.2636% ( 13) 00:08:55.201 16739.316 - 16844.594: 97.3608% ( 12) 00:08:55.201 16844.594 - 16949.873: 97.3931% ( 4) 00:08:55.201 16949.873 - 17055.152: 97.4336% ( 5) 00:08:55.201 17055.152 - 17160.431: 97.4822% ( 6) 00:08:55.201 17160.431 - 17265.709: 97.5470% ( 8) 00:08:55.201 17265.709 - 17370.988: 97.6684% ( 15) 00:08:55.201 17370.988 - 17476.267: 97.9356% ( 33) 00:08:55.201 17476.267 - 17581.545: 98.0408% ( 13) 00:08:55.201 17581.545 - 17686.824: 98.1622% ( 15) 00:08:55.201 17686.824 - 17792.103: 98.3646% ( 25) 00:08:55.201 17792.103 - 17897.382: 98.5266% ( 20) 00:08:55.201 17897.382 - 18002.660: 98.7128% ( 23) 00:08:55.201 18002.660 - 18107.939: 98.8018% ( 11) 00:08:55.201 18107.939 - 18213.218: 98.8666% ( 8) 00:08:55.201 18213.218 - 18318.496: 98.9556% ( 11) 00:08:55.201 18318.496 - 18423.775: 99.0609% ( 13) 00:08:55.201 18423.775 - 18529.054: 99.2228% ( 20) 00:08:55.201 18529.054 - 18634.333: 99.2795% ( 7) 00:08:55.201 18634.333 - 18739.611: 99.3361% ( 7) 00:08:55.201 18739.611 - 18844.890: 99.3604% ( 3) 00:08:55.201 
18950.169 - 19055.447: 99.3685% ( 1) 00:08:55.201 19055.447 - 19160.726: 99.4090% ( 5) 00:08:55.201 19160.726 - 19266.005: 99.4414% ( 4) 00:08:55.201 19266.005 - 19371.284: 99.4819% ( 5) 00:08:55.201 23371.875 - 23477.153: 99.5385% ( 7) 00:08:55.201 23477.153 - 23582.432: 99.5952% ( 7) 00:08:55.201 23582.432 - 23687.711: 99.6843% ( 11) 00:08:55.201 23687.711 - 23792.990: 99.8138% ( 16) 00:08:55.202 23792.990 - 23898.268: 99.8705% ( 7) 00:08:55.202 24951.055 - 25056.334: 99.8948% ( 3) 00:08:55.202 25056.334 - 25161.613: 99.9190% ( 3) 00:08:55.202 25161.613 - 25266.892: 99.9433% ( 3) 00:08:55.202 25266.892 - 25372.170: 99.9676% ( 3) 00:08:55.202 25372.170 - 25477.449: 99.9919% ( 3) 00:08:55.202 25477.449 - 25582.728: 100.0000% ( 1) 00:08:55.202 00:08:55.202 15:13:37 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:55.202 00:08:55.202 real 0m2.722s 00:08:55.202 user 0m2.301s 00:08:55.202 sys 0m0.305s 00:08:55.202 15:13:37 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.202 15:13:37 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:55.202 ************************************ 00:08:55.202 END TEST nvme_perf 00:08:55.202 ************************************ 00:08:55.202 15:13:37 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:55.202 15:13:37 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:55.202 15:13:37 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.202 15:13:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.202 ************************************ 00:08:55.202 START TEST nvme_hello_world 00:08:55.202 ************************************ 00:08:55.202 15:13:37 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:55.461 Initializing NVMe Controllers 00:08:55.461 Attached to 0000:00:10.0 00:08:55.461 Namespace ID: 1 size: 6GB 00:08:55.461 Attached to 0000:00:11.0 00:08:55.461 Namespace ID: 1 size: 5GB 00:08:55.461 Attached to 0000:00:13.0 00:08:55.461 Namespace ID: 1 size: 1GB 00:08:55.461 Attached to 0000:00:12.0 00:08:55.461 Namespace ID: 1 size: 4GB 00:08:55.461 Namespace ID: 2 size: 4GB 00:08:55.461 Namespace ID: 3 size: 4GB 00:08:55.461 Initialization complete. 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 00:08:55.461 INFO: using host memory buffer for IO 00:08:55.461 Hello world! 
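
The hello_world output above is SPDK's example app doing a write and read against each namespace of every controller it attaches, which is why "Hello world!" appears six times for the six namespaces. A minimal C sketch of that flow against the public SPDK API follows; it is an illustration (write path only, error handling and detach trimmed, names like "hello_sketch" are made up), not the example's actual source.

    /* Hedged sketch of the hello_world flow: probe, attach, write, poll. */
    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdbool.h>
    #include <stdio.h>

    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
        return true; /* attach every PCIe NVMe controller found */
    }

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        *(bool *)arg = true; /* flag checked by the polling loop below */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);
        struct spdk_nvme_qpair *qp =
            spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        /* pinned, DMA-able buffer; one 4 KiB block holds the greeting */
        char *buf = spdk_zmalloc(0x1000, 0x1000, NULL,
                                 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        bool done = false;

        snprintf(buf, 0x1000, "Hello world!");
        spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1 /* LBA count */,
                               io_done, &done, 0);
        while (!done) { /* the driver is polled; nothing completes by itself */
            spdk_nvme_qpair_process_completions(qp, 0);
        }
        printf("Hello world!\n");
        spdk_free(buf);
        spdk_nvme_ctrlr_free_io_qpair(qp);
    }

    int main(void) {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_sketch"; /* illustrative app name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
    }

The "using host memory buffer for IO" lines in the log indicate the example fell back to a host buffer rather than a controller memory buffer; the sketch's spdk_zmalloc() allocation corresponds to that path.
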
00:08:55.461 ************************************
00:08:55.461 END TEST nvme_hello_world
00:08:55.461 ************************************
00:08:55.461 
00:08:55.461 real 0m0.326s
00:08:55.461 user 0m0.125s
00:08:55.461 sys 0m0.157s
00:08:55.461 15:13:38 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:55.461 15:13:38 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:55.461 15:13:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:55.461 15:13:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:55.461 15:13:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.461 15:13:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:55.461 ************************************
00:08:55.461 START TEST nvme_sgl
00:08:55.461 ************************************
00:08:55.461 15:13:38 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:55.719 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:55.719 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:55.719 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:55.719 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:55.719 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:55.719 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:55.719 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:55.719 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:55.978 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:55.978 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:55.978 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:55.978 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:55.978 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:55.978 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:55.978 NVMe Readv/Writev Request test
00:08:55.978 Attached to 0000:00:10.0
00:08:55.978 Attached to 0000:00:11.0
00:08:55.978 Attached to 0000:00:13.0
00:08:55.978 Attached to 0000:00:12.0
00:08:55.978 0000:00:10.0: build_io_request_2 test passed
00:08:55.978 0000:00:10.0: build_io_request_4 test passed
00:08:55.978 0000:00:10.0: build_io_request_5 test passed
00:08:55.978 0000:00:10.0: build_io_request_6 test passed
00:08:55.978 0000:00:10.0: build_io_request_7 test passed
00:08:55.978 0000:00:10.0: build_io_request_10 test passed
00:08:55.978 0000:00:11.0: build_io_request_2 test passed
00:08:55.978 0000:00:11.0: build_io_request_4 test passed
00:08:55.978 0000:00:11.0: build_io_request_5 test passed
00:08:55.978 0000:00:11.0: build_io_request_6 test passed
00:08:55.978 0000:00:11.0: build_io_request_7 test passed
00:08:55.978 0000:00:11.0: build_io_request_10 test passed
00:08:55.978 Cleaning up...
00:08:55.978 
00:08:55.978 real 0m0.397s
00:08:55.978 user 0m0.181s
00:08:55.978 sys 0m0.169s
00:08:55.978 15:13:38 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:55.978 15:13:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:55.978 ************************************
00:08:55.978 END TEST nvme_sgl
00:08:55.978 ************************************
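
The nvme_sgl pass/fail lines above exercise scattered payloads: the driver walks the caller's buffer list through a pair of callbacks, and requests whose segments do not add up to the commanded length are rejected with "Invalid IO length parameter". A hedged C sketch of that submission pattern is below, assuming an iovec-backed scatter list; the helper names (sgl_ctx, reset_sgl, next_sge, submit_scattered_write) are illustrative, not the test's own code.

    #include "spdk/nvme.h"
    #include <sys/uio.h>

    struct sgl_ctx {
        struct iovec *iov;    /* caller-provided scatter list */
        int           iovcnt;
        int           idx;    /* current segment */
        size_t        off;    /* byte offset inside that segment */
    };

    /* driver rewinds the SGL to an absolute payload offset */
    static void reset_sgl(void *ref, uint32_t offset) {
        struct sgl_ctx *c = ref;
        c->idx = 0;
        while (offset >= c->iov[c->idx].iov_len) {
            offset -= c->iov[c->idx].iov_len;
            c->idx++;
        }
        c->off = offset;
    }

    /* driver pulls the next segment; handing back lengths that do not
     * sum to lba_count sectors is what the failing cases above hit */
    static int next_sge(void *ref, void **address, uint32_t *length) {
        struct sgl_ctx *c = ref;
        *address = (char *)c->iov[c->idx].iov_base + c->off;
        *length  = (uint32_t)(c->iov[c->idx].iov_len - c->off);
        c->off = 0;
        c->idx++;
        return 0;
    }

    static int submit_scattered_write(struct spdk_nvme_ns *ns,
                                      struct spdk_nvme_qpair *qp,
                                      struct sgl_ctx *c,
                                      uint64_t lba, uint32_t lba_count,
                                      spdk_nvme_cmd_cb cb) {
        /* cb_arg doubles as the cookie handed to reset_sgl/next_sge */
        return spdk_nvme_ns_cmd_writev(ns, qp, lba, lba_count,
                                       cb, c, 0, reset_sgl, next_sge);
    }
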
00:08:55.978 15:13:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:55.978 15:13:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:55.978 15:13:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:55.978 15:13:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:55.978 ************************************
00:08:55.978 START TEST nvme_e2edp
00:08:55.978 ************************************
00:08:55.978 15:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:56.237 NVMe Write/Read with End-to-End data protection test
00:08:56.237 Attached to 0000:00:10.0
00:08:56.237 Attached to 0000:00:11.0
00:08:56.237 Attached to 0000:00:13.0
00:08:56.237 Attached to 0000:00:12.0
00:08:56.237 Cleaning up...
00:08:56.237 
00:08:56.237 real 0m0.300s
00:08:56.237 user 0m0.103s
00:08:56.237 sys 0m0.156s
00:08:56.237 15:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:56.237 15:13:38 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:56.237 ************************************
00:08:56.237 END TEST nvme_e2edp
00:08:56.237 ************************************
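
nvme_dp verifies writes and reads with end-to-end data protection information (PI). As a rough, assumed illustration of the knobs involved (not the test's code, and only meaningful on a namespace actually formatted with PI; with PRACT set the controller generates and strips the PI itself, so the host buffer needs no extra bytes):

    #include "spdk/nvme.h"

    static int protected_write(struct spdk_nvme_ns *ns,
                               struct spdk_nvme_qpair *qp,
                               void *buf, uint64_t lba, uint32_t nlb,
                               spdk_nvme_cmd_cb cb, void *arg) {
        uint32_t flags = 0;

        /* skip the PI flags on namespaces formatted without protection */
        if (spdk_nvme_ns_get_pi_type(ns) !=
            SPDK_NVME_FMT_NVM_PROTECTION_DISABLE) {
            flags |= SPDK_NVME_IO_FLAGS_PRACT;        /* controller inserts PI */
            flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD;  /* verify CRC guard tag  */
            flags |= SPDK_NVME_IO_FLAGS_PRCHK_REFTAG; /* verify reference tag  */
        }
        return spdk_nvme_ns_cmd_write(ns, qp, buf, lba, nlb, cb, arg, flags);
    }

The quick "Cleaning up..." above with no per-device output suggests these QEMU namespaces were not formatted with PI, so the test had nothing to check.
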
00:08:56.496 15:13:38 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:56.496 15:13:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:56.496 15:13:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:56.496 15:13:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:56.496 ************************************
00:08:56.496 START TEST nvme_reserve
00:08:56.496 ************************************
00:08:56.496 15:13:38 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:56.754 =====================================================
00:08:56.754 NVMe Controller at PCI bus 0, device 16, function 0
00:08:56.754 =====================================================
00:08:56.754 Reservations: Not Supported
00:08:56.754 =====================================================
00:08:56.754 NVMe Controller at PCI bus 0, device 17, function 0
00:08:56.754 =====================================================
00:08:56.754 Reservations: Not Supported
00:08:56.754 =====================================================
00:08:56.754 NVMe Controller at PCI bus 0, device 19, function 0
00:08:56.754 =====================================================
00:08:56.754 Reservations: Not Supported
00:08:56.754 =====================================================
00:08:56.754 NVMe Controller at PCI bus 0, device 18, function 0
00:08:56.754 =====================================================
00:08:56.754 Reservations: Not Supported
00:08:56.754 Reservation test passed
00:08:56.754 
00:08:56.754 real 0m0.304s
00:08:56.754 user 0m0.097s
00:08:56.754 sys 0m0.159s
00:08:56.754 ************************************
00:08:56.754 END TEST nvme_reserve
00:08:56.754 ************************************
00:08:56.754 15:13:39 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:56.754 15:13:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:56.754 15:13:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:56.754 15:13:39 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:56.754 15:13:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:56.754 15:13:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:56.754 ************************************
00:08:56.754 START TEST nvme_err_injection
00:08:56.754 ************************************
00:08:56.754 15:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:57.012 NVMe Error Injection test
00:08:57.012 Attached to 0000:00:10.0
00:08:57.012 Attached to 0000:00:11.0
00:08:57.012 Attached to 0000:00:13.0
00:08:57.012 Attached to 0000:00:12.0
00:08:57.012 0000:00:13.0: get features failed as expected
00:08:57.012 0000:00:12.0: get features failed as expected
00:08:57.012 0000:00:10.0: get features failed as expected
00:08:57.012 0000:00:11.0: get features failed as expected
00:08:57.012 0000:00:12.0: get features successfully as expected
00:08:57.012 0000:00:10.0: get features successfully as expected
00:08:57.012 0000:00:11.0: get features successfully as expected
00:08:57.012 0000:00:13.0: get features successfully as expected
00:08:57.012 0000:00:12.0: read failed as expected
00:08:57.012 0000:00:10.0: read failed as expected
00:08:57.012 0000:00:11.0: read failed as expected
00:08:57.012 0000:00:13.0: read failed as expected
00:08:57.012 0000:00:12.0: read successfully as expected
00:08:57.012 0000:00:10.0: read successfully as expected
00:08:57.012 0000:00:11.0: read successfully as expected
00:08:57.012 0000:00:13.0: read successfully as expected
00:08:57.012 Cleaning up...
00:08:57.012 
00:08:57.012 real 0m0.324s
00:08:57.012 user 0m0.120s
00:08:57.012 sys 0m0.159s
00:08:57.012 15:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:57.012 15:13:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:57.012 ************************************
00:08:57.012 END TEST nvme_err_injection
00:08:57.012 ************************************
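
The "failed as expected" lines come from failures that the test injects into the driver before issuing otherwise valid commands. A short C sketch of the public injection API follows, under the assumption (suggested by the log, not confirmed by it) that GET FEATURES on the admin queue is one of the injected commands:

    #include "spdk/nvme.h"

    /* queue one artificial failure: the next GET FEATURES on the admin
     * queue (qpair == NULL) is completed with Invalid Opcode instead of
     * being submitted to the device */
    static int inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr) {
        return spdk_nvme_qpair_add_cmd_error_injection(
            ctrlr, NULL, SPDK_NVME_OPC_GET_FEATURES,
            true /* do_not_submit */, 0 /* timeout_in_us */,
            1 /* err_count */, SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INVALID_OPCODE);
    }

    /* after the expected failure is observed, drop the injection so the
     * same command can then succeed ("successfully as expected") */
    static void clear_injection(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
                                                   SPDK_NVME_OPC_GET_FEATURES);
    }

The failed/succeeded pairing per device in the log matches this inject, observe, remove, retry cycle.
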
00:08:57.271 15:13:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:57.271 15:13:39 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:57.271 15:13:39 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:57.271 15:13:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.271 ************************************
00:08:57.271 START TEST nvme_overhead
00:08:57.271 ************************************
00:08:57.271 15:13:39 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:58.672 Initializing NVMe Controllers
00:08:58.672 Attached to 0000:00:10.0
00:08:58.672 Attached to 0000:00:11.0
00:08:58.672 Attached to 0000:00:13.0
00:08:58.672 Attached to 0000:00:12.0
00:08:58.672 Initialization complete. Launching workers.
00:08:58.672 submit (in ns)   avg, min, max =  14294.5,  11579.1,  96694.8
00:08:58.672 complete (in ns) avg, min, max =   9383.6,   7906.8, 119476.3
00:08:58.672 
00:08:58.672 Submit histogram
00:08:58.672 ================
00:08:58.672        Range in us     Cumulative     Count
00:08:58.672 [submit histogram buckets elided: 11.566 us to 97.054 us, cumulative 0.0128% to 100.0000%]
00:08:58.674 
00:08:58.674 Complete histogram
00:08:58.674 ==================
00:08:58.674        Range in us     Cumulative     Count
00:08:58.674 [complete histogram buckets elided: 7.865 us to 120.084 us, cumulative 0.0128% to 100.0000%]
00:08:58.675 
00:08:58.675 ************************************
00:08:58.675 END TEST nvme_overhead
00:08:58.675 ************************************
00:08:58.675 
00:08:58.675 real 0m1.321s
00:08:58.675 user 0m1.121s
00:08:58.675 sys 0m0.151s
00:08:58.675 15:13:41 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:58.675 15:13:41 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
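
The submit and complete averages above separate software cost (time spent in the submission and completion calls) from device latency. One plausible way to collect the submit side, sketched here with SPDK's tick counter (the variable names are illustrative and this is not the overhead tool's source):

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    static uint64_t submit_ticks_total;
    static uint64_t submit_count;

    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) {
        (*(uint64_t *)arg)++; /* count completions */
    }

    static void timed_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                           void *buf, uint64_t lba, uint64_t *completed) {
        uint64_t t0 = spdk_get_ticks();

        spdk_nvme_ns_cmd_read(ns, qp, buf, lba, 1, io_done, completed, 0);
        /* only the time inside the submit call is charged to "submit" */
        submit_ticks_total += spdk_get_ticks() - t0;
        submit_count++;
    }

    static void report(void) {
        double ns_per_tick = 1e9 / (double)spdk_get_ticks_hz();

        printf("submit (in ns) avg = %.1f\n",
               (double)submit_ticks_total / (double)submit_count * ns_per_tick);
    }

Timing spdk_nvme_qpair_process_completions() the same way would give the complete-side column.
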
00:08:58.675 15:13:41 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:58.675 15:13:41 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:58.675 15:13:41 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:58.675 15:13:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:58.675 ************************************
00:08:58.675 START TEST nvme_arbitration
00:08:58.675 ************************************
00:08:58.675 15:13:41 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:09:01.964 Initializing NVMe Controllers
00:09:01.964 Attached to 0000:00:10.0
00:09:01.964 Attached to 0000:00:11.0
00:09:01.964 Attached to 0000:00:13.0
00:09:01.964 Attached to 0000:00:12.0
00:09:01.964 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:09:01.964 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:09:01.964 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:09:01.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:09:01.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:09:01.964 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:09:01.964 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:09:01.964 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:09:01.964 Initialization complete. Launching workers.
00:09:01.964 Starting thread on core 2 with urgent priority queue
00:09:01.964 Starting thread on core 1 with urgent priority queue
00:09:01.964 Starting thread on core 3 with urgent priority queue
00:09:01.964 Starting thread on core 0 with urgent priority queue
00:09:01.964 QEMU NVMe Ctrl (12340 ) core 0:  448.00 IO/s  223.21 secs/100000 ios
00:09:01.964 QEMU NVMe Ctrl (12342 ) core 0:  448.00 IO/s  223.21 secs/100000 ios
00:09:01.964 QEMU NVMe Ctrl (12341 ) core 1:  405.33 IO/s  246.71 secs/100000 ios
00:09:01.964 QEMU NVMe Ctrl (12342 ) core 1:  405.33 IO/s  246.71 secs/100000 ios
00:09:01.964 QEMU NVMe Ctrl (12343 ) core 2:  597.33 IO/s  167.41 secs/100000 ios
00:09:01.964 QEMU NVMe Ctrl (12342 ) core 3:  682.67 IO/s  146.48 secs/100000 ios
00:09:01.964 ========================================================
00:09:01.964 
00:09:01.964 ************************************
00:09:01.964 END TEST nvme_arbitration
00:09:01.964 ************************************
00:09:01.964 
00:09:01.964 real 0m3.531s
00:09:01.964 user 0m9.528s
00:09:01.964 sys 0m0.193s
00:09:01.964 15:13:44 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:01.964 15:13:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
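
The per-core "urgent priority queue" threads above rely on NVMe weighted round robin (WRR) arbitration. A sketch of the two knobs involved, assuming WRR is requested at attach time the way the arbitration example does (the function names here are illustrative):

    #include "spdk/nvme.h"

    /* in probe_cb: ask for WRR arbitration before the controller attaches */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
        opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
        return true;
    }

    /* each worker then picks a priority class for its own IO queue pair */
    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr) {
        struct spdk_nvme_io_qpair_opts qopts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qopts, sizeof(qopts));
        qopts.qprio = SPDK_NVME_QPRIO_URGENT; /* the class named in the log */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qopts, sizeof(qopts));
    }

With -a 0 in the command line above the arbitration burst is left at its minimum, so the per-core IO/s spread reflects the queue weights rather than burst effects.
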
00:09:02.222 15:13:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:02.222 15:13:44 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:02.222 15:13:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:02.222 15:13:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:02.222 ************************************
00:09:02.222 START TEST nvme_single_aen
00:09:02.222 ************************************
00:09:02.223 15:13:44 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:09:02.480 Asynchronous Event Request test
00:09:02.480 Attached to 0000:00:10.0
00:09:02.480 Attached to 0000:00:11.0
00:09:02.480 Attached to 0000:00:13.0
00:09:02.480 Attached to 0000:00:12.0
00:09:02.480 Reset controller to setup AER completions for this process
00:09:02.480 Registering asynchronous event callbacks...
00:09:02.480 Getting orig temperature thresholds of all controllers
00:09:02.480 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:02.480 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:02.480 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:02.480 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:09:02.480 Setting all controllers temperature threshold low to trigger AER
00:09:02.480 Waiting for all controllers temperature threshold to be set lower
00:09:02.480 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:02.480 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:09:02.480 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:02.480 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:09:02.481 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:02.481 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:09:02.481 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:09:02.481 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:09:02.481 Waiting for all controllers to trigger AER and reset threshold
00:09:02.481 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:02.481 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:02.481 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:02.481 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:09:02.481 Cleaning up...
00:09:02.481 
00:09:02.481 real 0m0.328s
00:09:02.481 user 0m0.117s
00:09:02.481 sys 0m0.163s
00:09:02.481 15:13:45 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:02.481 15:13:45 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:09:02.481 ************************************
00:09:02.481 END TEST nvme_single_aen
00:09:02.481 ************************************
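
The AER round trip above (lower the temperature threshold below the current reading, wait for the event, then restore the threshold) maps onto a small set of driver calls. A hedged C sketch, not the aer test's source:

    #include "spdk/nvme.h"
    #include <stdbool.h>

    static bool aer_seen;

    static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl) {
        aer_seen = true; /* e.g. the temperature-threshold event in the log */
    }

    static void feature_done(void *arg, const struct spdk_nvme_cpl *cpl) {}

    static void trigger_temperature_aer(struct spdk_nvme_ctrlr *ctrlr) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* cdw11 carries the threshold in Kelvin (plus selector bits left
         * zero here); 200 K is safely below the ~323 K readings above,
         * so the controller raises the event immediately */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                        SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        200, 0, NULL, 0, feature_done, NULL);

        /* AERs complete on the admin queue, which is also polled */
        while (!aer_seen) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }

The "aer_cb for log page 2" lines correspond to the SMART/health log page that the event identifies; the test then re-raises the threshold so later tests are not disturbed.
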
00:09:02.739 15:13:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:09:02.739 15:13:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:02.739 15:13:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:02.739 15:13:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:02.998 [2024-10-25 15:13:45.599123] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:12.974 Executing: test_write_invalid_db 00:09:12.974 Waiting for AER completion... 00:09:12.974 Failure: test_write_invalid_db 00:09:12.974 00:09:12.974 Executing: test_invalid_db_write_overflow_sq 00:09:12.974 Waiting for AER completion... 00:09:12.974 Failure: test_invalid_db_write_overflow_sq 00:09:12.974 00:09:12.974 Executing: test_invalid_db_write_overflow_cq 00:09:12.974 Waiting for AER completion... 00:09:12.974 Failure: test_invalid_db_write_overflow_cq 00:09:12.974 00:09:12.974 15:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:12.974 15:13:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:12.974 [2024-10-25 15:13:55.646345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:22.966 Executing: test_write_invalid_db 00:09:22.966 Waiting for AER completion... 00:09:22.966 Failure: test_write_invalid_db 00:09:22.966 00:09:22.966 Executing: test_invalid_db_write_overflow_sq 00:09:22.966 Waiting for AER completion... 00:09:22.966 Failure: test_invalid_db_write_overflow_sq 00:09:22.966 00:09:22.966 Executing: test_invalid_db_write_overflow_cq 00:09:22.966 Waiting for AER completion... 00:09:22.966 Failure: test_invalid_db_write_overflow_cq 00:09:22.966 00:09:22.966 15:14:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:22.966 15:14:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:23.227 [2024-10-25 15:14:05.700541] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:33.207 Executing: test_write_invalid_db 00:09:33.207 Waiting for AER completion... 00:09:33.207 Failure: test_write_invalid_db 00:09:33.207 00:09:33.207 Executing: test_invalid_db_write_overflow_sq 00:09:33.207 Waiting for AER completion... 00:09:33.207 Failure: test_invalid_db_write_overflow_sq 00:09:33.207 00:09:33.207 Executing: test_invalid_db_write_overflow_cq 00:09:33.207 Waiting for AER completion... 
00:09:33.207 Failure: test_invalid_db_write_overflow_cq 00:09:33.207 00:09:33.207 15:14:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:33.207 15:14:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:33.207 [2024-10-25 15:14:15.756442] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 Executing: test_write_invalid_db 00:09:43.225 Waiting for AER completion... 00:09:43.225 Failure: test_write_invalid_db 00:09:43.225 00:09:43.225 Executing: test_invalid_db_write_overflow_sq 00:09:43.225 Waiting for AER completion... 00:09:43.225 Failure: test_invalid_db_write_overflow_sq 00:09:43.225 00:09:43.225 Executing: test_invalid_db_write_overflow_cq 00:09:43.225 Waiting for AER completion... 00:09:43.225 Failure: test_invalid_db_write_overflow_cq 00:09:43.225 00:09:43.225 ************************************ 00:09:43.225 END TEST nvme_doorbell_aers 00:09:43.225 ************************************ 00:09:43.225 00:09:43.225 real 0m40.336s 00:09:43.225 user 0m28.331s 00:09:43.225 sys 0m11.601s 00:09:43.225 15:14:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.225 15:14:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:43.225 15:14:25 nvme -- nvme/nvme.sh@97 -- # uname 00:09:43.225 15:14:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:43.225 15:14:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.225 15:14:25 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:43.225 15:14:25 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.225 15:14:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.225 ************************************ 00:09:43.225 START TEST nvme_multi_aen 00:09:43.225 ************************************ 00:09:43.225 15:14:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:43.225 [2024-10-25 15:14:25.862260] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.862356] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.862373] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.864757] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.864819] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.864840] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.225 [2024-10-25 15:14:25.866738] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. 
Dropping the request. 00:09:43.226 [2024-10-25 15:14:25.866791] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.226 [2024-10-25 15:14:25.866814] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.226 [2024-10-25 15:14:25.868433] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.226 [2024-10-25 15:14:25.868497] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.226 [2024-10-25 15:14:25.868523] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64483) is not found. Dropping the request. 00:09:43.226 Child process pid: 65004 00:09:43.486 [Child] Asynchronous Event Request test 00:09:43.486 [Child] Attached to 0000:00:10.0 00:09:43.486 [Child] Attached to 0000:00:11.0 00:09:43.486 [Child] Attached to 0000:00:13.0 00:09:43.486 [Child] Attached to 0000:00:12.0 00:09:43.486 [Child] Registering asynchronous event callbacks... 00:09:43.486 [Child] Getting orig temperature thresholds of all controllers 00:09:43.486 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.486 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.486 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.486 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.486 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:43.486 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.486 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.486 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.486 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.486 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.486 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.486 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.486 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.486 [Child] Cleaning up... 00:09:43.746 Asynchronous Event Request test 00:09:43.746 Attached to 0000:00:10.0 00:09:43.746 Attached to 0000:00:11.0 00:09:43.746 Attached to 0000:00:13.0 00:09:43.746 Attached to 0000:00:12.0 00:09:43.746 Reset controller to setup AER completions for this process 00:09:43.746 Registering asynchronous event callbacks... 
00:09:43.746 Getting orig temperature thresholds of all controllers 00:09:43.746 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.746 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.746 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.746 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:43.746 Setting all controllers temperature threshold low to trigger AER 00:09:43.746 Waiting for all controllers temperature threshold to be set lower 00:09:43.746 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.746 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:43.746 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.746 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:43.746 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.746 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:43.746 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:43.746 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:43.746 Waiting for all controllers to trigger AER and reset threshold 00:09:43.746 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.746 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.746 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.746 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:43.746 Cleaning up... 00:09:43.746 ************************************ 00:09:43.746 END TEST nvme_multi_aen 00:09:43.746 ************************************ 00:09:43.746 00:09:43.746 real 0m0.662s 00:09:43.746 user 0m0.220s 00:09:43.746 sys 0m0.335s 00:09:43.746 15:14:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.746 15:14:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:43.746 15:14:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:43.746 15:14:26 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:43.746 15:14:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.746 15:14:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.746 ************************************ 00:09:43.746 START TEST nvme_startup 00:09:43.746 ************************************ 00:09:43.746 15:14:26 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:44.005 Initializing NVMe Controllers 00:09:44.005 Attached to 0000:00:10.0 00:09:44.005 Attached to 0000:00:11.0 00:09:44.005 Attached to 0000:00:13.0 00:09:44.005 Attached to 0000:00:12.0 00:09:44.005 Initialization complete. 00:09:44.005 Time used:199658.609 (us). 
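
The nvme_startup pass above attached and initialized all four controllers in roughly 200 ms ('Time used:199658.609 (us).'). A loose stand-in using only tools already present in this run, for a wall-clock feel of single-controller bring-up (spdk_nvme_identify attaches, enumerates and detaches, so it over-estimates):

  time /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:10.0' > /dev/null

The startup binary's own -t 1000000 argument is presumably a time budget; its units are not shown in this log, so it is left uninterpreted here.
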
00:09:44.005 ************************************ 00:09:44.005 END TEST nvme_startup 00:09:44.005 ************************************ 00:09:44.005 00:09:44.005 real 0m0.311s 00:09:44.005 user 0m0.110s 00:09:44.005 sys 0m0.154s 00:09:44.005 15:14:26 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.005 15:14:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:44.005 15:14:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:44.005 15:14:26 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.005 15:14:26 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.005 15:14:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.005 ************************************ 00:09:44.005 START TEST nvme_multi_secondary 00:09:44.005 ************************************ 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65060 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65061 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:44.005 15:14:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:47.320 Initializing NVMe Controllers 00:09:47.320 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.320 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.320 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.320 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.320 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:47.320 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:47.320 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:47.320 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:47.320 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:47.320 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:47.320 Initialization complete. Launching workers. 
00:09:47.320 ======================================================== 00:09:47.320 Latency(us) 00:09:47.320 Device Information : IOPS MiB/s Average min max 00:09:47.320 PCIE (0000:00:10.0) NSID 1 from core 2: 2943.78 11.50 5433.79 1529.43 13503.15 00:09:47.320 PCIE (0000:00:11.0) NSID 1 from core 2: 2943.78 11.50 5434.67 1407.99 13881.24 00:09:47.320 PCIE (0000:00:13.0) NSID 1 from core 2: 2943.78 11.50 5442.86 1484.84 17195.15 00:09:47.320 PCIE (0000:00:12.0) NSID 1 from core 2: 2943.78 11.50 5442.91 1366.11 17275.81 00:09:47.320 PCIE (0000:00:12.0) NSID 2 from core 2: 2943.78 11.50 5442.42 1319.81 13388.72 00:09:47.320 PCIE (0000:00:12.0) NSID 3 from core 2: 2943.78 11.50 5442.82 1329.21 13491.50 00:09:47.320 ======================================================== 00:09:47.320 Total : 17662.69 68.99 5439.91 1319.81 17275.81 00:09:47.320 00:09:47.578 15:14:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65060 00:09:47.837 Initializing NVMe Controllers 00:09:47.837 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:47.837 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:47.837 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:47.837 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:47.837 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:47.837 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:47.837 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:47.837 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:47.837 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:47.837 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:47.837 Initialization complete. Launching workers. 00:09:47.837 ======================================================== 00:09:47.837 Latency(us) 00:09:47.837 Device Information : IOPS MiB/s Average min max 00:09:47.837 PCIE (0000:00:10.0) NSID 1 from core 1: 4729.79 18.48 3380.28 1353.57 10390.83 00:09:47.837 PCIE (0000:00:11.0) NSID 1 from core 1: 4729.79 18.48 3382.17 1393.23 8961.32 00:09:47.837 PCIE (0000:00:13.0) NSID 1 from core 1: 4729.79 18.48 3382.28 1495.49 9603.12 00:09:47.837 PCIE (0000:00:12.0) NSID 1 from core 1: 4729.79 18.48 3382.27 1574.90 9467.20 00:09:47.837 PCIE (0000:00:12.0) NSID 2 from core 1: 4729.79 18.48 3382.46 1395.50 10536.68 00:09:47.837 PCIE (0000:00:12.0) NSID 3 from core 1: 4729.79 18.48 3382.65 1401.15 12191.01 00:09:47.837 ======================================================== 00:09:47.837 Total : 28378.74 110.85 3382.02 1353.57 12191.01 00:09:47.837 00:09:49.754 Initializing NVMe Controllers 00:09:49.754 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.754 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.754 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.754 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.754 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.754 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.754 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.754 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.754 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.754 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.754 Initialization complete. Launching workers. 
00:09:49.754 ======================================================== 00:09:49.754 Latency(us) 00:09:49.754 Device Information : IOPS MiB/s Average min max 00:09:49.754 PCIE (0000:00:10.0) NSID 1 from core 0: 7612.52 29.74 2100.20 969.53 10834.52 00:09:49.754 PCIE (0000:00:11.0) NSID 1 from core 0: 7612.52 29.74 2101.32 972.02 11096.70 00:09:49.754 PCIE (0000:00:13.0) NSID 1 from core 0: 7612.52 29.74 2101.28 935.43 11549.98 00:09:49.754 PCIE (0000:00:12.0) NSID 1 from core 0: 7612.52 29.74 2101.25 855.11 11670.77 00:09:49.754 PCIE (0000:00:12.0) NSID 2 from core 0: 7612.52 29.74 2101.22 796.46 11286.55 00:09:49.754 PCIE (0000:00:12.0) NSID 3 from core 0: 7612.52 29.74 2101.18 746.82 10878.59 00:09:49.754 ======================================================== 00:09:49.754 Total : 45675.15 178.42 2101.08 746.82 11670.77 00:09:49.754 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65061 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65130 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65131 00:09:49.754 15:14:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:53.036 Initializing NVMe Controllers 00:09:53.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:53.036 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:53.036 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:53.036 Initialization complete. Launching workers. 
00:09:53.036 ======================================================== 00:09:53.036 Latency(us) 00:09:53.036 Device Information : IOPS MiB/s Average min max 00:09:53.036 PCIE (0000:00:10.0) NSID 1 from core 1: 5365.91 20.96 2979.64 959.31 7030.35 00:09:53.036 PCIE (0000:00:11.0) NSID 1 from core 1: 5365.91 20.96 2981.56 983.22 6890.99 00:09:53.036 PCIE (0000:00:13.0) NSID 1 from core 1: 5365.91 20.96 2981.73 959.39 6707.45 00:09:53.036 PCIE (0000:00:12.0) NSID 1 from core 1: 5365.91 20.96 2982.20 974.92 6726.56 00:09:53.036 PCIE (0000:00:12.0) NSID 2 from core 1: 5365.91 20.96 2982.96 969.92 6604.81 00:09:53.036 PCIE (0000:00:12.0) NSID 3 from core 1: 5371.24 20.98 2980.13 979.54 6578.71 00:09:53.036 ======================================================== 00:09:53.036 Total : 32200.79 125.78 2981.37 959.31 7030.35 00:09:53.036 00:09:53.036 Initializing NVMe Controllers 00:09:53.036 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.036 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.036 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:53.036 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:53.036 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:53.036 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:53.036 Initialization complete. Launching workers. 00:09:53.036 ======================================================== 00:09:53.036 Latency(us) 00:09:53.036 Device Information : IOPS MiB/s Average min max 00:09:53.036 PCIE (0000:00:10.0) NSID 1 from core 0: 5154.27 20.13 3101.89 1075.86 10240.01 00:09:53.036 PCIE (0000:00:11.0) NSID 1 from core 0: 5154.27 20.13 3103.58 1104.43 9943.63 00:09:53.036 PCIE (0000:00:13.0) NSID 1 from core 0: 5154.27 20.13 3103.57 1078.94 9818.23 00:09:53.036 PCIE (0000:00:12.0) NSID 1 from core 0: 5154.27 20.13 3103.52 1071.98 9800.58 00:09:53.036 PCIE (0000:00:12.0) NSID 2 from core 0: 5154.27 20.13 3103.50 1032.34 9810.21 00:09:53.036 PCIE (0000:00:12.0) NSID 3 from core 0: 5154.27 20.13 3103.50 1024.48 9842.08 00:09:53.036 ======================================================== 00:09:53.036 Total : 30925.61 120.80 3103.26 1024.48 10240.01 00:09:53.036 00:09:55.565 Initializing NVMe Controllers 00:09:55.565 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.565 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.565 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.565 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.565 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:55.565 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:55.565 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:55.565 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:55.565 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:55.565 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:55.565 Initialization complete. Launching workers. 
00:09:55.565 ======================================================== 00:09:55.565 Latency(us) 00:09:55.565 Device Information : IOPS MiB/s Average min max 00:09:55.565 PCIE (0000:00:10.0) NSID 1 from core 2: 2978.32 11.63 5370.26 1088.50 18629.77 00:09:55.565 PCIE (0000:00:11.0) NSID 1 from core 2: 2978.32 11.63 5371.89 1075.33 19386.93 00:09:55.565 PCIE (0000:00:13.0) NSID 1 from core 2: 2978.32 11.63 5371.54 1061.37 13540.44 00:09:55.565 PCIE (0000:00:12.0) NSID 1 from core 2: 2978.32 11.63 5371.41 1058.70 13929.29 00:09:55.565 PCIE (0000:00:12.0) NSID 2 from core 2: 2978.32 11.63 5371.30 1128.65 14003.01 00:09:55.565 PCIE (0000:00:12.0) NSID 3 from core 2: 2978.32 11.63 5371.45 1124.56 14880.57 00:09:55.565 ======================================================== 00:09:55.565 Total : 17869.89 69.80 5371.31 1058.70 19386.93 00:09:55.565 00:09:55.565 ************************************ 00:09:55.565 END TEST nvme_multi_secondary 00:09:55.565 ************************************ 00:09:55.565 15:14:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65130 00:09:55.565 15:14:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65131 00:09:55.565 00:09:55.565 real 0m11.119s 00:09:55.565 user 0m18.577s 00:09:55.565 sys 0m1.053s 00:09:55.565 15:14:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.565 15:14:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:55.565 15:14:37 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:55.565 15:14:37 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:55.565 15:14:37 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64058 ]] 00:09:55.565 15:14:37 nvme -- common/autotest_common.sh@1090 -- # kill 64058 00:09:55.565 15:14:37 nvme -- common/autotest_common.sh@1091 -- # wait 64058 00:09:55.565 [2024-10-25 15:14:37.887751] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.887945] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.888027] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.888082] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.893777] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.893864] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.893899] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.893935] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.898805] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 
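
nvme_multi_secondary exercises SPDK's multi-process mode: every spdk_nvme_perf instance in this test joins the same shared-memory group via -i 0, one acts as the primary that owns the controllers, and the secondaries (distinct -c core masks) drive I/O through the shared state, which is why lcores 0, 1 and 2 all report against the same four PCIe controllers. A minimal two-process sketch using the exact flags from this run (the sleep is an assumption; the harness sequences the processes through its own waits):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &    # primary on core 0
  sleep 1                                           # let the primary finish init
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2      # secondary joins shm group 0
  wait

The '(pid 65003) is not found. Dropping the request.' lines around kill_stub appear to be the benign side of the same model: admin requests still queued for a process that has already exited are dropped rather than completed.
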
00:09:55.565 [2024-10-25 15:14:37.898894] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.898928] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.898966] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.903173] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.903252] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.903273] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 [2024-10-25 15:14:37.903298] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65003) is not found. Dropping the request. 00:09:55.565 15:14:38 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:55.565 15:14:38 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:55.565 15:14:38 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:55.565 15:14:38 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:55.565 15:14:38 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.565 15:14:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:55.565 ************************************ 00:09:55.565 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:55.565 ************************************ 00:09:55.565 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:55.565 * Looking for test storage... 
00:09:55.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.565 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:55.565 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lcov --version 00:09:55.565 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:55.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.825 --rc genhtml_branch_coverage=1 00:09:55.825 --rc genhtml_function_coverage=1 00:09:55.825 --rc genhtml_legend=1 00:09:55.825 --rc geninfo_all_blocks=1 00:09:55.825 --rc geninfo_unexecuted_blocks=1 00:09:55.825 00:09:55.825 ' 00:09:55.825 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.826 --rc genhtml_branch_coverage=1 00:09:55.826 --rc genhtml_function_coverage=1 00:09:55.826 --rc genhtml_legend=1 00:09:55.826 --rc geninfo_all_blocks=1 00:09:55.826 --rc geninfo_unexecuted_blocks=1 00:09:55.826 00:09:55.826 ' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.826 --rc genhtml_branch_coverage=1 00:09:55.826 --rc genhtml_function_coverage=1 00:09:55.826 --rc genhtml_legend=1 00:09:55.826 --rc geninfo_all_blocks=1 00:09:55.826 --rc geninfo_unexecuted_blocks=1 00:09:55.826 00:09:55.826 ' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:55.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.826 --rc genhtml_branch_coverage=1 00:09:55.826 --rc genhtml_function_coverage=1 00:09:55.826 --rc genhtml_legend=1 00:09:55.826 --rc geninfo_all_blocks=1 00:09:55.826 --rc geninfo_unexecuted_blocks=1 00:09:55.826 00:09:55.826 ' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:55.826 
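
Two budgets are set just above: err_injection_timeout=15000000 matches the --timeout-in-us the test later passes to the error injection (the planted admin command is held for up to 15 s), and test_timeout=5 caps, in seconds, how long the controller reset may take to flush that stuck command back out. The final gate is plain shell arithmetic; a sketch under those same names (mirroring the @61/@79 trace further down, where diff_time ends up as 3):

  (( diff_time > test_timeout )) && { echo 'reset did not unstick the admin command in time'; exit 1; }
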
15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # bdfs=() 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1505 -- # local bdfs 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # bdfs=() 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1494 -- # local bdfs 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65297 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65297 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65297 ']' 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:55.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
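
err_injection_sct=0 / err_injection_sc=1 above spell out the completion status the test will plant: Status Code Type 0 (generic) with Status Code 1, which the driver later prints as 'INVALID OPCODE (00/01)'. The get_first_nvme_bdf trace above also shows how the target controller is chosen: gen_nvme.sh emits an attach config for every NVMe device and jq extracts the addresses. A sketch of that selection, with head -n1 standing in for the helper's take-the-first-element step (an assumption about intent, not the literal code, which indexes an array):

  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
      | jq -r '.config[].params.traddr' | head -n1   # -> 0000:00:10.0 in this run
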
00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:55.826 15:14:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.084 [2024-10-25 15:14:38.605600] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:09:56.084 [2024-10-25 15:14:38.605749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65297 ] 00:09:56.084 [2024-10-25 15:14:38.810714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.343 [2024-10-25 15:14:38.936925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.343 [2024-10-25 15:14:38.937303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.343 [2024-10-25 15:14:38.937165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.343 [2024-10-25 15:14:38.937339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.279 nvme0n1 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_dfRIj.txt 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.279 true 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1729869279 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65320 00:09:57.279 15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.279 
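
The stuck-admin-command scenario is driven entirely over the RPC socket; consolidated from the traces above and below (rpc_cmd is the harness wrapper around rpc.py), the flow is:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # arm a one-shot failure for admin opcode 10 (0x0a, Get Features): hold the
  # command for up to 15 s, then complete it with SCT 0 / SC 1 (Invalid Opcode)
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # submit Get Features (byte 0 = 0x0a opcode, byte 40 = 0x07 selects the
  # Number of Queues feature in cdw10) in the background; it gets stuck
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0   # the reset completes the held request manually
  wait                                    # send_cmd now returns the completion JSON

The returned .cpl field ('AAAAAAAAAAAAAAAAAAACAA==') decodes the same way the base64_decode_bits helper does it below:

  printf '%s' AAAAAAAAAAAAAAAAAAACAA== | base64 -d | hexdump -ve '/1 "0x%02x\n"'

Bytes 14-15 of the 16-byte completion hold the status field, 0x0002 here; stripping the phase bit gives SC = (0x0002 >> 1) & 0xff = 0x1 and SCT = (0x0002 >> 9) & 0x7 = 0x0, exactly the injected values, which is what the @75 comparison at the end of the test verifies.
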
15:14:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:59.820 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:59.820 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.820 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.820 [2024-10-25 15:14:41.982002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:59.820 [2024-10-25 15:14:41.982496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:59.820 [2024-10-25 15:14:41.982547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:59.820 [2024-10-25 15:14:41.982564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.820 [2024-10-25 15:14:41.985023] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:59.821 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.821 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65320 00:09:59.821 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65320 00:09:59.821 15:14:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65320 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_dfRIj.txt 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_dfRIj.txt 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65297 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65297 ']' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65297 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65297 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.821 killing process with pid 65297 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65297' 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65297 00:09:59.821 15:14:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65297 00:10:02.354 15:14:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:02.354 15:14:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:02.354 00:10:02.354 real 0m6.558s 00:10:02.354 user 0m22.776s 00:10:02.354 sys 0m0.838s 00:10:02.354 15:14:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:02.354 15:14:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:02.354 ************************************ 00:10:02.354 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:02.354 ************************************ 00:10:02.354 15:14:44 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:02.354 15:14:44 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:02.354 15:14:44 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:02.354 15:14:44 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.354 15:14:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.354 ************************************ 00:10:02.354 START TEST nvme_fio 00:10:02.354 ************************************ 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # bdfs=() 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1494 -- # local bdfs 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:10:02.354 15:14:44 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:02.354 15:14:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:02.613 15:14:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:02.613 15:14:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:02.920 15:14:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:02.920 15:14:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.920 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:02.921 15:14:45 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:02.921 15:14:45 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:03.180 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:03.180 fio-3.35 00:10:03.180 Starting 1 thread 00:10:06.466 00:10:06.466 test: (groupid=0, jobs=1): err= 0: pid=65472: Fri Oct 25 15:14:49 2024 00:10:06.466 read: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec) 00:10:06.466 slat (usec): min=3, max=103, avg= 4.83, stdev= 1.41 00:10:06.466 clat (usec): min=250, max=10595, avg=2847.93, stdev=482.05 00:10:06.466 lat (usec): min=256, max=10604, avg=2852.76, stdev=482.82 00:10:06.466 clat percentiles (usec): 00:10:06.466 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:10:06.466 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2868], 00:10:06.466 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130], 00:10:06.466 | 99.00th=[ 4883], 99.50th=[ 6128], 99.90th=[ 9110], 99.95th=[ 9241], 00:10:06.466 | 99.99th=[ 9765] 00:10:06.466 bw ( KiB/s): min=82576, max=96432, per=100.00%, avg=90173.33, stdev=7024.33, samples=3 00:10:06.466 iops : min=20644, max=24108, avg=22543.33, stdev=1756.08, samples=3 00:10:06.466 write: IOPS=22.2k, BW=86.9MiB/s (91.1MB/s)(174MiB/2001msec); 0 zone resets 00:10:06.466 slat (nsec): min=3900, max=99748, avg=5215.67, stdev=1375.29 00:10:06.466 clat (usec): min=275, max=10790, avg=2859.59, stdev=502.70 00:10:06.466 lat (usec): min=281, max=10834, avg=2864.81, stdev=503.39 00:10:06.466 clat percentiles (usec): 00:10:06.466 | 1.00th=[ 2278], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:10:06.466 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2868], 00:10:06.466 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3130], 00:10:06.466 | 99.00th=[ 5014], 99.50th=[ 6259], 99.90th=[ 9110], 99.95th=[ 9241], 00:10:06.466 | 99.99th=[10159] 00:10:06.466 bw ( KiB/s): min=84472, max=95536, per=100.00%, avg=90394.67, stdev=5573.23, samples=3 00:10:06.466 iops : min=21118, max=23884, avg=22598.67, stdev=1393.31, samples=3 00:10:06.466 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:06.466 lat (msec) : 2=0.27%, 4=97.98%, 10=1.71%, 20=0.01% 00:10:06.466 cpu : usr=99.30%, sys=0.15%, ctx=3, majf=0, 
minf=607 00:10:06.466 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:06.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.466 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.466 issued rwts: total=44814,44514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.466 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.466 00:10:06.466 Run status group 0 (all jobs): 00:10:06.466 READ: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:10:06.467 WRITE: bw=86.9MiB/s (91.1MB/s), 86.9MiB/s-86.9MiB/s (91.1MB/s-91.1MB/s), io=174MiB (182MB), run=2001-2001msec 00:10:06.725 ----------------------------------------------------- 00:10:06.725 Suppressions used: 00:10:06.725 count bytes template 00:10:06.725 1 32 /usr/src/fio/parse.c 00:10:06.725 1 8 libtcmalloc_minimal.so 00:10:06.725 ----------------------------------------------------- 00:10:06.725 00:10:06.725 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:06.725 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:06.725 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:06.725 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:07.462 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:07.462 15:14:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:07.462 15:14:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:07.462 15:14:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:07.462 15:14:50 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:07.462 15:14:50 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:07.721 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:07.721 fio-3.35 00:10:07.721 Starting 1 thread 00:10:11.908 00:10:11.908 test: (groupid=0, jobs=1): err= 0: pid=65538: Fri Oct 25 15:14:54 2024 00:10:11.908 read: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec) 00:10:11.908 slat (nsec): min=4235, max=59932, avg=5354.34, stdev=1183.54 00:10:11.908 clat (usec): min=220, max=11144, avg=2990.45, stdev=348.81 00:10:11.908 lat (usec): min=226, max=11204, avg=2995.81, stdev=349.20 00:10:11.908 clat percentiles (usec): 00:10:11.908 | 1.00th=[ 2704], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:10:11.908 | 30.00th=[ 2900], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:10:11.908 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3294], 00:10:11.908 | 99.00th=[ 4424], 99.50th=[ 4948], 99.90th=[ 6915], 99.95th=[ 8586], 00:10:11.908 | 99.99th=[10945] 00:10:11.908 bw ( KiB/s): min=84608, max=85152, per=99.38%, avg=84906.67, stdev=275.89, samples=3 00:10:11.908 iops : min=21152, max=21288, avg=21226.67, stdev=68.97, samples=3 00:10:11.908 write: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(166MiB/2001msec); 0 zone resets 00:10:11.908 slat (usec): min=4, max=121, avg= 5.57, stdev= 1.32 00:10:11.908 clat (usec): min=288, max=11016, avg=2995.98, stdev=358.39 00:10:11.908 lat (usec): min=293, max=11040, avg=3001.55, stdev=358.78 00:10:11.908 clat percentiles (usec): 00:10:11.908 | 1.00th=[ 2704], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:10:11.908 | 30.00th=[ 2900], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:10:11.908 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3130], 95.00th=[ 3294], 00:10:11.908 | 99.00th=[ 4490], 99.50th=[ 5080], 99.90th=[ 7242], 99.95th=[ 8979], 00:10:11.908 | 99.99th=[10683] 00:10:11.908 bw ( KiB/s): min=84736, max=85528, per=100.00%, avg=85016.00, stdev=444.05, samples=3 00:10:11.908 iops : min=21184, max=21382, avg=21254.00, stdev=111.01, samples=3 00:10:11.908 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:11.908 lat (msec) : 2=0.16%, 4=97.71%, 10=2.07%, 20=0.02% 00:10:11.908 cpu : usr=99.25%, sys=0.20%, ctx=3, majf=0, minf=606 00:10:11.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:11.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.908 issued rwts: total=42739,42418,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.908 00:10:11.908 Run status group 0 (all jobs): 00:10:11.908 READ: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:10:11.908 WRITE: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:11.908 ----------------------------------------------------- 00:10:11.908 Suppressions used: 00:10:11.908 count bytes template 00:10:11.908 1 32 /usr/src/fio/parse.c 00:10:11.908 1 8 libtcmalloc_minimal.so 00:10:11.908 ----------------------------------------------------- 00:10:11.908 00:10:11.908 
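
Each per-controller fio pass in nvme_fio follows the same pattern: the SPDK ioengine is injected by LD_PRELOAD-ing build/fio/spdk_nvme into stock fio (with libasan.so.8 prepended here only because this is an ASAN build), and the device is named through fio's --filename. Stripped of the sanitizer, the invocation is:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096

Note the dots in traddr=0000.00.12.0: fio treats ':' in a filename as a separator, so the PCI address 0000:00:12.0 is written with '.' in its place.
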
15:14:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:11.908 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:11.908 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:11.908 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:11.908 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:11.908 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:12.166 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:12.166 15:14:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:12.167 15:14:54 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:12.425 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:12.425 fio-3.35 00:10:12.425 Starting 1 thread 00:10:16.605 00:10:16.605 test: (groupid=0, jobs=1): err= 0: pid=65604: Fri Oct 25 15:14:58 2024 00:10:16.605 read: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(167MiB/2001msec) 00:10:16.605 slat (nsec): min=3756, max=52409, avg=4759.50, stdev=1784.54 00:10:16.605 clat (usec): min=249, max=10967, avg=2981.17, stdev=405.85 00:10:16.605 lat (usec): min=253, max=11019, avg=2985.93, stdev=406.31 00:10:16.606 clat percentiles (usec): 00:10:16.606 | 1.00th=[ 2040], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2769], 00:10:16.606 | 30.00th=[ 2802], 
40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:10:16.606 | 70.00th=[ 3032], 80.00th=[ 3228], 90.00th=[ 3425], 95.00th=[ 3556], 00:10:16.606 | 99.00th=[ 4228], 99.50th=[ 4621], 99.90th=[ 6063], 99.95th=[ 8717], 00:10:16.606 | 99.99th=[10814] 00:10:16.606 bw ( KiB/s): min=75344, max=90120, per=98.31%, avg=84224.33, stdev=7827.11, samples=3 00:10:16.606 iops : min=18836, max=22530, avg=21056.00, stdev=1956.73, samples=3 00:10:16.606 write: IOPS=21.3k, BW=83.0MiB/s (87.1MB/s)(166MiB/2001msec); 0 zone resets 00:10:16.606 slat (nsec): min=3937, max=34049, avg=5280.62, stdev=1860.29 00:10:16.606 clat (usec): min=229, max=10829, avg=2990.94, stdev=411.84 00:10:16.606 lat (usec): min=235, max=10861, avg=2996.22, stdev=412.28 00:10:16.606 clat percentiles (usec): 00:10:16.606 | 1.00th=[ 2024], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 2769], 00:10:16.606 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2966], 00:10:16.606 | 70.00th=[ 3032], 80.00th=[ 3261], 90.00th=[ 3425], 95.00th=[ 3556], 00:10:16.606 | 99.00th=[ 4228], 99.50th=[ 4621], 99.90th=[ 6849], 99.95th=[ 8979], 00:10:16.606 | 99.99th=[10552] 00:10:16.606 bw ( KiB/s): min=75912, max=90232, per=99.22%, avg=84347.33, stdev=7493.00, samples=3 00:10:16.606 iops : min=18978, max=22558, avg=21086.67, stdev=1873.16, samples=3 00:10:16.606 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:16.606 lat (msec) : 2=0.88%, 4=97.58%, 10=1.46%, 20=0.03% 00:10:16.606 cpu : usr=99.05%, sys=0.35%, ctx=5, majf=0, minf=606 00:10:16.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:16.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.606 issued rwts: total=42859,42528,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.606 00:10:16.606 Run status group 0 (all jobs): 00:10:16.606 READ: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=167MiB (176MB), run=2001-2001msec 00:10:16.606 WRITE: bw=83.0MiB/s (87.1MB/s), 83.0MiB/s-83.0MiB/s (87.1MB/s-87.1MB/s), io=166MiB (174MB), run=2001-2001msec 00:10:16.606 ----------------------------------------------------- 00:10:16.606 Suppressions used: 00:10:16.606 count bytes template 00:10:16.606 1 32 /usr/src/fio/parse.c 00:10:16.606 1 8 libtcmalloc_minimal.so 00:10:16.606 ----------------------------------------------------- 00:10:16.606 00:10:16.606 15:14:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:16.606 15:14:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:16.606 15:14:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:16.606 15:14:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:16.606 15:14:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:16.606 15:14:59 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:16.864 15:14:59 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:16.864 15:14:59 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:16.864 15:14:59 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:17.121 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:17.121 fio-3.35 00:10:17.121 Starting 1 thread 00:10:22.457 00:10:22.457 test: (groupid=0, jobs=1): err= 0: pid=65666: Fri Oct 25 15:15:04 2024 00:10:22.457 read: IOPS=22.1k, BW=86.3MiB/s (90.5MB/s)(173MiB/2001msec) 00:10:22.457 slat (nsec): min=3726, max=73314, avg=4516.29, stdev=1788.59 00:10:22.457 clat (usec): min=231, max=10388, avg=2887.95, stdev=393.92 00:10:22.457 lat (usec): min=235, max=10442, avg=2892.47, stdev=394.22 00:10:22.457 clat percentiles (usec): 00:10:22.457 | 1.00th=[ 1876], 5.00th=[ 2507], 10.00th=[ 2671], 20.00th=[ 2737], 00:10:22.457 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:10:22.457 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3228], 00:10:22.457 | 99.00th=[ 4359], 99.50th=[ 5080], 99.90th=[ 7111], 99.95th=[ 8356], 00:10:22.457 | 99.99th=[10290] 00:10:22.457 bw ( KiB/s): min=86096, max=90032, per=99.93%, avg=88282.67, stdev=2004.11, samples=3 00:10:22.457 iops : min=21524, max=22508, avg=22070.67, stdev=501.03, samples=3 00:10:22.457 write: IOPS=21.9k, BW=85.7MiB/s (89.9MB/s)(171MiB/2001msec); 0 zone resets 00:10:22.457 slat (nsec): min=3903, max=61701, avg=5044.10, stdev=1811.60 00:10:22.457 clat (usec): min=195, max=10302, avg=2900.15, stdev=398.07 00:10:22.457 lat (usec): min=199, max=10321, avg=2905.19, stdev=398.37 00:10:22.457 clat percentiles (usec): 00:10:22.457 | 1.00th=[ 1893], 5.00th=[ 2540], 10.00th=[ 2671], 20.00th=[ 2769], 00:10:22.457 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2933], 00:10:22.457 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3228], 
00:10:22.457 | 99.00th=[ 4490], 99.50th=[ 5080], 99.90th=[ 7373], 99.95th=[ 8848], 00:10:22.457 | 99.99th=[10159] 00:10:22.457 bw ( KiB/s): min=85728, max=89928, per=100.00%, avg=88418.67, stdev=2335.95, samples=3 00:10:22.457 iops : min=21432, max=22482, avg=22104.67, stdev=583.99, samples=3 00:10:22.457 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:10:22.457 lat (msec) : 2=1.30%, 4=97.12%, 10=1.52%, 20=0.02% 00:10:22.457 cpu : usr=99.35%, sys=0.10%, ctx=6, majf=0, minf=605 00:10:22.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:22.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:22.457 issued rwts: total=44196,43902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:22.457 00:10:22.457 Run status group 0 (all jobs): 00:10:22.457 READ: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=173MiB (181MB), run=2001-2001msec 00:10:22.457 WRITE: bw=85.7MiB/s (89.9MB/s), 85.7MiB/s-85.7MiB/s (89.9MB/s-89.9MB/s), io=171MiB (180MB), run=2001-2001msec 00:10:22.457 ----------------------------------------------------- 00:10:22.457 Suppressions used: 00:10:22.457 count bytes template 00:10:22.457 1 32 /usr/src/fio/parse.c 00:10:22.457 1 8 libtcmalloc_minimal.so 00:10:22.457 ----------------------------------------------------- 00:10:22.457 00:10:22.457 15:15:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:22.457 15:15:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:22.457 ************************************ 00:10:22.457 END TEST nvme_fio 00:10:22.457 ************************************ 00:10:22.457 00:10:22.457 real 0m19.765s 00:10:22.458 user 0m15.140s 00:10:22.458 sys 0m4.520s 00:10:22.458 15:15:04 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.458 15:15:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 00:10:22.458 real 1m35.714s 00:10:22.458 user 3m45.152s 00:10:22.458 sys 0m24.165s 00:10:22.458 15:15:04 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.458 15:15:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 ************************************ 00:10:22.458 END TEST nvme 00:10:22.458 ************************************ 00:10:22.458 15:15:04 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:22.458 15:15:04 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:22.458 15:15:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:22.458 15:15:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.458 15:15:04 -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 ************************************ 00:10:22.458 START TEST nvme_scc 00:10:22.458 ************************************ 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:22.458 * Looking for test storage... 
00:10:22.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1689 -- # lcov --version 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:22.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.458 --rc genhtml_branch_coverage=1 00:10:22.458 --rc genhtml_function_coverage=1 00:10:22.458 --rc genhtml_legend=1 00:10:22.458 --rc geninfo_all_blocks=1 00:10:22.458 --rc geninfo_unexecuted_blocks=1 00:10:22.458 00:10:22.458 ' 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:22.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.458 --rc genhtml_branch_coverage=1 00:10:22.458 --rc genhtml_function_coverage=1 00:10:22.458 --rc genhtml_legend=1 00:10:22.458 --rc geninfo_all_blocks=1 00:10:22.458 --rc geninfo_unexecuted_blocks=1 00:10:22.458 00:10:22.458 ' 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:10:22.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.458 --rc genhtml_branch_coverage=1 00:10:22.458 --rc genhtml_function_coverage=1 00:10:22.458 --rc genhtml_legend=1 00:10:22.458 --rc geninfo_all_blocks=1 00:10:22.458 --rc geninfo_unexecuted_blocks=1 00:10:22.458 00:10:22.458 ' 00:10:22.458 15:15:04 nvme_scc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:22.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.458 --rc genhtml_branch_coverage=1 00:10:22.458 --rc genhtml_function_coverage=1 00:10:22.458 --rc genhtml_legend=1 00:10:22.458 --rc geninfo_all_blocks=1 00:10:22.458 --rc geninfo_unexecuted_blocks=1 00:10:22.458 00:10:22.458 ' 00:10:22.458 15:15:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.458 15:15:04 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.458 15:15:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.458 15:15:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.458 15:15:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.458 15:15:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:22.458 15:15:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
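The lcov gate traced above (scripts/common.sh) splits both version strings on '.', '-' and ':' into the ver1/ver2 arrays and compares them field by field; lt 1.15 2 succeeding is consistent with the legacy --rc lcov_* option spelling assigned to lcov_rc_opt right after it. A condensed sketch of that comparison, assuming only the behavior visible in the trace:

# Condensed sketch of cmp_versions/lt from the scripts/common.sh trace above.
cmp_versions() {
    local IFS=.-:            # split versions on '.', '-' and ':'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"; local op=$2
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
    done
    [[ $op == *'='* ]]       # all fields equal: true only for ==, <=, >=
}
lt() { cmp_versions "$1" '<' "$2"; }   # e.g. lt 1.15 2 -> true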
00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:22.458 15:15:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:22.458 15:15:04 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.458 15:15:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:22.458 15:15:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:22.458 15:15:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:22.458 15:15:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:22.717 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.286 Waiting for block devices as requested 00:10:23.286 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.286 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.545 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.545 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.830 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:28.830 15:15:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:28.830 15:15:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:28.830 15:15:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:28.830 15:15:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:28.830 15:15:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
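Each eval in the dump that follows copies one field of nvme-cli's id-ctrl output into a global associative array named after the controller (nvme0[vid]=0x1b36, nvme0[sn]='12341   ', and so on for every register). A condensed sketch of the nvme_get parsing loop, assuming the "field : value" nvme-cli output format implied by the trace:

# Condensed sketch of nvme_get from the nvme/functions.sh trace: parse
# "field : value" lines into a global associative array named by $1.
nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}   # field name, e.g. vid, sn, subnqn
        val=${val# }               # keep trailing padding, as the trace does
        eval "${ref}[\$reg]=\"\$val\""
    done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
}
# e.g. nvme_get nvme0 id-ctrl /dev/nvme0; echo "${nvme0[vid]}"  -> 0x1b36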
00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.830 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:28.831 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:28.832 15:15:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.832 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.833 15:15:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:28.833 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
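The records above are the per-field loop at the heart of nvme/functions.sh's nvme_get: every "reg : val" line printed by nvme-cli is split on the colon (IFS=:) and stored into a controller- or namespace-specific associative array via eval. A minimal standalone sketch of that pattern, assuming nvme-cli's plain-text id-ns output — the array and variable names here are illustrative, not the script's own:

  declare -A ns_info
  # Split each "reg : val" line on the first ':'. With IFS=: the remainder of
  # the line (including further colons, as in the lbaf fields) lands in val.
  while IFS=: read -r reg val; do
      [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
      reg=${reg//[[:space:]]/}               # trim the padded field name
      ns_info[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1)
  echo "nsze=${ns_info[nsze]}"               # e.g. 0x140000 blocks, as traced above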
00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.834 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.835 15:15:11 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:28.835 15:15:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:28.835 15:15:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:28.835 15:15:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:28.835 15:15:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:28.835 15:15:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.835 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.836 
15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
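Fields such as oacs=0x12a just captured are bitmasks defined by the NVMe base specification; the trace only stores the raw value, and individual capabilities are tested bit by bit later. An illustrative check of the kind this enables (the variable name is an assumption, not from the repo):

  oacs=0x12a
  # Per the NVMe base spec, OACS bit 3 = Namespace Management/Attachment;
  # 0x12a has bits 1, 3, 5 and 8 set.
  if (( oacs & (1 << 3) )); then
      echo 'controller advertises namespace management'
  fi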
00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:28.836 15:15:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:28.836 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:28.837 15:15:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.837 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:28.838 15:15:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
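The sqes=0x66 and cqes=0x44 values recorded a few fields earlier pack two sizes into one byte: per the NVMe spec the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, each as a power of two. A small helper sketch — the function name is assumed, not part of nvme/functions.sh:

  decode_qes() {
      local raw=$(( $1 ))
      # low nibble = minimum entry size, high nibble = maximum, as log2(bytes)
      printf 'min=%d max=%d bytes\n' $(( 1 << (raw & 0xf) )) $(( 1 << (raw >> 4) ))
  }
  decode_qes 0x66   # SQES: min=64 max=64 (standard 64-byte SQ entries)
  decode_qes 0x44   # CQES: min=16 max=16 (standard 16-byte CQ entries)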
00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.838 15:15:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:28.838 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.839 
15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.839 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:28.840 15:15:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:28.840 15:15:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:28.840 15:15:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:28.840 15:15:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:28.840 15:15:11 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:28.840 15:15:11 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.840 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:28.841 15:15:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
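
The trace above is nvme/functions.sh caching 'nvme id-ctrl' output into a bash associative array, one register per iteration: set IFS=:, read a reg/val pair, skip empty values, then eval the assignment. A minimal sketch of that loop, reconstructed from the xtrace (the real helper differs in details such as quoting and argument handling):

    # Hedged reconstruction of nvme_get from the trace above; not the verbatim SPDK source.
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                 # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # the [[ -n ... ]] checks in the trace
            reg=${reg//[[:space:]]/}        # "vid " -> "vid"
            eval "${ref}[\$reg]=\$val"      # e.g. nvme2[vid]=0x1b36
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

Invoked as 'nvme_get nvme2 id-ctrl /dev/nvme2', after which fields read back as ${nvme2[sn]}, ${nvme2[mdts]}, and so on.
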
00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:28.841 15:15:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:28.841 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
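
Among the fields captured above, mdts=7 is the one later transfer-size math keys off: MDTS is a power-of-two multiplier of the controller's minimum memory page size. A quick check against the array this loop is filling, assuming the usual 4 KiB MPSMIN (read the CAP register to be exact):

    mps_min=4096                      # assumption: CAP.MPSMIN = 4 KiB
    mdts=${nvme2[mdts]}               # 7 in the trace above
    echo "max transfer: $(( (1 << mdts) * mps_min / 1024 )) KiB"   # -> 512 KiB
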
00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.842 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:28.843 
15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
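
For the namespaces, the nsze, flbas, and lbaf fields being recorded here are enough to compute usable capacity: the low nibble of flbas (0x4 above) selects the active LBA format, and that format's lbads is the log2 block size. A sketch over the same arrays (lbaf4 is the 'ms:0 lbads:12 rp:0 (in use)' entry recorded further down):

    fmt=$(( ${nvme2n1[flbas]} & 0xf ))                       # 0x4 -> format 4
    lbads=$(sed 's/.*lbads:\([0-9]*\).*/\1/' <<< "${nvme2n1[lbaf$fmt]}")
    echo "$(( ${nvme2n1[nsze]} * (1 << lbads) / 1024**3 )) GiB"   # 0x100000 blocks x 4 KiB = 4 GiB
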
00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.843 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:28.844 15:15:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:28.844 15:15:11 nvme_scc -- 
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:10:28.844 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:10:28.845 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
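The eight lbafN entries captured for each namespace describe the supported LBA formats: ms is the metadata bytes per block, lbads is log2 of the data block size, and rp is a relative-performance hint. flbas=0x4 selects format 4, which the log accordingly marks "(in use)": lbads:12 with ms:0, i.e. plain 4096-byte data blocks. The arithmetic as a one-liner:

lbads=12                                      # from "lbaf4 : ms:0 lbads:12 rp:0 (in use)"
echo "$((1 << lbads)) bytes per data block"   # prints: 4096 bytes per data block

Combined with nsze=0x100000 (1,048,576 blocks), each of these QEMU-emulated namespaces has 4 GiB of logical capacity.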
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:10:29.110 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:10:29.111 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
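With nvme2n3 stored, controller nvme2 is fully registered: its namespace arrays are collected under nvme2_ns, and the ctrls, nvmes, bdfs and ordered_ctrls tables tie the controller name to its namespaces and PCI address (0000:00:12.0). The scripts/common.sh lines opening the next iteration below are pci_can_use deciding whether the candidate address 0000:00:13.0 may be used; the empty left operand in "[[ =~ 0000:00:13.0 ]]" followed by "[[ -z '' ]]" is consistent with a block-list check followed by an allow-list check, roughly as sketched here (function and variable names are illustrative assumptions, not the verbatim source; both lists are unset in this run, so every address is accepted):

pci_can_use_sketch() {                            # hypothetical stand-in for pci_can_use
    local addr=$1
    [[ ${PCI_BLOCKED:-} =~ $addr ]] && return 1   # reject anything on the block list
    [[ -z ${PCI_ALLOWED:-} ]] && return 0         # empty allow list: accept everything
    [[ ${PCI_ALLOWED} =~ $addr ]]                 # otherwise the address must be listed
}
pci_can_use_sketch 0000:00:13.0 && echo usable    # prints: usable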
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:29.112 15:15:11 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:29.112 15:15:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:10:29.112 15:15:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:29.112 15:15:11 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 '
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2
00:10:29.112 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0
00:10:29.113 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0
00:10:29.114 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7
00:10:29.115 15:15:11 nvme_scc --
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:29.115 15:15:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:29.115 15:15:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:29.115 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:29.116 
15:15:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:29.116 15:15:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:29.116 15:15:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:29.116 15:15:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:29.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:30.261 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.526 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.526 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.526 0000:00:12.0 (1b36 
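The nvme3 dump above is the nvme_get pattern from test/common/nvme/functions.sh: pipe nvme-cli's id-ctrl output through a read loop and stash each register in a bash associative array. A minimal, self-contained sketch of that pattern, simplified relative to the real function (which evals into a dynamically named array and preserves significant trailing whitespace in sn/mn/fr):

    #!/usr/bin/env bash
    # Parse "reg : val" lines from nvme-cli into an associative array,
    # the way nvme_get populates nvme0..nvme3 in the trace above.
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # skip lines without a value
        reg=${reg//[[:space:]]/}                   # strip padding around the key
        val="${val#"${val%%[! ]*}"}"               # trim leading spaces only
        ctrl_regs[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme3)              # assumes nvme-cli and the device exist
    echo "oncs=${ctrl_regs[oncs]}"                 # prints oncs=0x15d on this QEMU controller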
00:10:30.785 15:15:13 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:30.785 15:15:13 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:30.785 15:15:13 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:30.785 15:15:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:30.785 ************************************
00:10:30.785 START TEST nvme_simple_copy
00:10:30.785 ************************************
00:10:30.785 15:15:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:31.045 Initializing NVMe Controllers
00:10:31.045 Attaching to 0000:00:10.0
00:10:31.045 Controller supports SCC. Attached to 0000:00:10.0
00:10:31.045 Namespace ID: 1 size: 6GB
00:10:31.045 Initialization complete.
00:10:31.045 Controller QEMU NVMe Ctrl (12340 )
00:10:31.045 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:31.045 Namespace Block Size:4096
00:10:31.045 Writing LBAs 0 to 63 with Random Data
00:10:31.045 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:31.045 LBAs matching Written Data: 64
00:10:31.045 real 0m0.371s
00:10:31.045 user 0m0.147s
00:10:31.045 sys 0m0.122s
00:10:31.045 15:15:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:31.045 15:15:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:31.045 ************************************
00:10:31.045 END TEST nvme_simple_copy
00:10:31.045 ************************************
00:10:31.045 real 0m9.069s
00:10:31.045 user 0m1.558s
00:10:31.045 sys 0m2.568s
00:10:31.045 15:15:13 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:31.045 15:15:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:31.046 ************************************
00:10:31.046 END TEST nvme_scc
00:10:31.046 ************************************
00:10:31.046 15:15:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:31.046 15:15:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:31.046 15:15:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:31.046 15:15:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:31.046 15:15:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:31.046 15:15:13 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:31.046 15:15:13 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:31.046 15:15:13 -- common/autotest_common.sh@10 -- # set +x
00:10:31.046 ************************************
00:10:31.046 START TEST nvme_fdp
00:10:31.046 ************************************
00:10:31.046 15:15:13 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
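Both nvme_simple_copy above and nvme_fdp here go through autotest_common.sh's run_test, which prints the banner pairs and the real/user/sys timings seen throughout this log. A rough sketch of that wrapper's shape, under the assumption that banners and timing are its core job (the helper name is illustrative; the real run_test also toggles xtrace and records per-test timing data):

    # Simplified shape of the run_test wrapper that produces the
    # START/END banners and timings in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # emits the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    # e.g.: run_test_sketch nvme_fdp test/nvme/nvme_fdp.sh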
00:10:31.304 * Looking for test storage...
00:10:31.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:31.305 15:15:13 nvme_fdp -- common/autotest_common.sh@1688-1689 -- # lcov --version | awk '{print $NF}' -> 1.15; lt 1.15 2
00:10:31.305 15:15:13 nvme_fdp -- scripts/common.sh@333-368 -- # cmp_versions 1.15 '<' 2: IFS=.-: read -ra ver1/ver2, compare fields numerically (1 < 2) -> return 0
00:10:31.305 15:15:13 nvme_fdp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:31.305 15:15:13 nvme_fdp -- common/autotest_common.sh@1702-1703 -- # export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' LCOV="lcov $LCOV_OPTS"
00:10:31.305 15:15:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:31.305 15:15:14 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:31.305 15:15:14 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh (shopt -s extglob); source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:31.305 15:15:14 nvme_fdp -- paths/export.sh@2-6 -- # export PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin (same three prefixes repeated from each earlier sourcing):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
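The cmp_versions trace shows the technique scripts/common.sh uses to decide that lcov 1.15 predates 2: split both version strings on '.', '-', and ':' into arrays, then compare field by field. A self-contained sketch of that comparison (function name shortened here; the real script also normalizes non-numeric fields through its decimal helper):

    # Field-wise dotted-version comparison, as in scripts/common.sh cmp_versions.
    version_lt() {    # version_lt 1.15 2 -> exit 0, i.e. 1.15 < 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing fields count as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy --rc options"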
00:10:31.305 15:15:14 nvme_fdp -- nvme/functions.sh@10-14 -- # declare -A ctrls nvmes bdfs; declare -a ordered_ctrls; nvme_name=
00:10:31.305 15:15:14 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:31.305 15:15:14 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:31.874 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:32.134 Waiting for block devices as requested
00:10:32.393 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:32.393 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:32.393 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:32.652 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:37.935 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:37.935 15:15:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:10:37.935 15:15:20 nvme_fdp -- nvme/functions.sh@47-51 -- # for ctrl in /sys/class/nvme/nvme*: nvme0, pci=0000:00:11.0, pci_can_use -> return 0, ctrl_dev=nvme0
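scan_nvme_ctrls enumerates controllers through sysfs before parsing each one, and the earlier ctrl_has_scc checks test single ONCS bits (bit 8 is Simple Copy). A compact sketch combining both steps; the sysfs-to-BDF resolution via readlink is an assumption of this sketch, not lifted from functions.sh:

    # Enumerate NVMe controllers via sysfs and test ONCS bit 8 (Simple Copy),
    # mirroring the scan_nvme_ctrls + ctrl_has_scc pattern in the trace.
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                   # e.g. nvme0
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:11.0
        oncs=$(nvme id-ctrl "/dev/$name" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        if (( oncs & 1 << 8 )); then
            echo "$name ($pci) supports Simple Copy (oncs=$oncs)"
        fi
    done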
00:10:37.935 15:15:20 nvme_fdp -- nvme/functions.sh@16-23 -- # nvme_get nvme0 id-ctrl /dev/nvme0 (/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0), registers read into nvme0[]:
00:10:37.935     vid=0x1b36 ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0
00:10:37.936     ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0
00:10:37.936     crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7
00:10:37.937     elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0
00:10:37.937     unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:37.937     sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0
00:10:37.938     domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
00:10:37.938     vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0
00:10:37.939     maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:37.939     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:37.939 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:37.940 15:15:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
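For reference, the lbafN strings captured above encode each LBA format: ms is the metadata size in bytes, lbads is the data size as a power of two, and rp is relative performance. nvme0n1's flbas=0x4 selects lbaf4, i.e. 4096-byte blocks with no metadata. A minimal decoding sketch, assuming the nvme0n1 array populated above (variable names here are illustrative, not from functions.sh):

# Sketch: pick apart the in-use LBA format recorded in nvme0n1 above.
lbaf_idx=$(( ${nvme0n1[flbas]} & 0xf ))          # 0x4 -> lbaf4
fmt=${nvme0n1[lbaf$lbaf_idx]}                    # 'ms:0 lbads:12 rp:0 (in use)'
read -r _ ms _ lbads _ rp _ <<< "${fmt//:/ }"    # ms=0, lbads=12, rp=0
echo "nvme0n1 block size: $((1 << lbads)) bytes" # lbads:12 -> 4096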
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:37.941 15:15:20 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:37.941 15:15:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:37.941 15:15:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:37.941 15:15:20 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
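Every block of functions.sh@21-@23 lines in this trace is one iteration of the nvme_get helper, which has just been invoked again for nvme1. Reconstructed from the sh@16-@23 entries, the helper is essentially the loop below; this is a sketch inferred from the trace, not the verbatim functions.sh source, and NVME_BIN is an illustrative stand-in for the path on the sh@16 line:

NVME_BIN=/usr/local/src/nvme-cli/nvme       # assumption: path taken from the sh@16 line
nvme_get() {
    local ref=$1 reg val                    # sh@17: target array name, e.g. nvme1
    shift                                   # sh@18: remaining args go to nvme-cli
    local -gA "$ref=()"                     # sh@20: declare the global assoc array
    while IFS=: read -r reg val; do         # sh@21: split each "reg : val" output line
        [[ -n $val ]] || continue           # sh@22: skip blank and header lines
        eval "${ref}[${reg// /}]=\"${val# }\""   # sh@23: store the trimmed pair
    done < <("$NVME_BIN" "$@")              # sh@16: e.g. id-ctrl /dev/nvme1
}

It is called exactly as the sh@52 and sh@57 lines show: once per controller with id-ctrl, then once per namespace with id-ns.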
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:10:37.941 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:37.942 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:37.943 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
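With the second controller's identify data stored, the registries filled at functions.sh@60-@63 (ctrls, nvmes, bdfs, ordered_ctrls) tie each nvmeX attribute array to its PCI address. A small usage sketch, assuming nvme1 gets registered the same way nvme0 was above; it reuses the trace's own eval style for the indirect lookup:

# Sketch: walk the registries built at nvme/functions.sh@60-@63.
for ctrl_dev in "${ordered_ctrls[@]}"; do
    eval "subnqn=\${${ctrl_dev}[subnqn]}"   # indirect lookup into nvme0 / nvme1
    printf '%s %s %s\n' "$ctrl_dev" "${bdfs[$ctrl_dev]}" "$subnqn"
done
# -> nvme0 0000:00:11.0 nqn.2019-08.org.qemu:12341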
0x17a17a ]] 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.944 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:37.945 15:15:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.945 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:37.946 15:15:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:37.946 15:15:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:37.946 15:15:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:37.946 15:15:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:37.946 
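[Note] The trace above (nvme/functions.sh@16-23) is the nvme_get helper flattening `nvme id-ctrl` / `nvme id-ns` output into global associative arrays (nvme1, nvme1n1, ...): each line is split on the first ':' into reg/val and eval'd into the array. A minimal sketch of that pattern, assuming nvme-cli is on PATH (the suite pins /usr/local/src/nvme-cli/nvme) and adding key trimming that the trace does not show:

nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"                       # array survives at global scope
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # 'subnqn   ' -> 'subnqn' (assumed trim)
        val=${val# }                          # drop the space after ':'
        [[ -n $reg && -n $val ]] || continue  # skip blank/unparsable lines
        eval "$ref[\$reg]=\$val"              # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
    done < <(nvme "$cmd" "$dev")
}

# usage:
#   nvme_get nvme1 id-ctrl /dev/nvme1
#   echo "${nvme1[subnqn]}"   # nqn.2019-08.org.qemu:12340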
15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:37.946 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:37.947 15:15:20 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:37.947 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.948 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.948 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:37.949 15:15:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:37.949 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:37.950 15:15:20 nvme_fdp -- 
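[Note] Around those nvme_get calls, functions.sh@47-63 (visible at the nvme1 -> nvme2 hand-off above) walks /sys/class/nvme, gates each controller through pci_can_use, and records it in the ctrls/nvmes/bdfs/ordered_ctrls tables. A sketch of that bookkeeping pass; nvme_get is the sketch above, the BDF derivation via the device symlink is an assumption rather than lifted from functions.sh, and pci_can_use is stubbed permissive so the sketch runs standalone:

type pci_can_use &>/dev/null || pci_can_use() { :; }  # permissive stub for the sketch

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # nvme1, nvme2, ...
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed derivation)
    pci_can_use "$pci" || continue                    # honor the job's block/allow list
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
    declare -gA "${ctrl_dev}_ns=()"
    declare -n _ctrl_ns=${ctrl_dev}_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do               # /sys/class/nvme/nvme2/nvme2n1, ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns_dev##*n}]=$ns_dev               # _ctrl_ns[1]=nvme2n1
    done
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns
    bdfs[$ctrl_dev]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    unset -n _ctrl_ns
done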
nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:37.950 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.951 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.952 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:37.953 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.953 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
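Zooming out, the per-namespace dumps repeat inside a discovery walk over sysfs: each /sys/class/nvme/nvmeX controller is parsed with id-ctrl (after a pci_can_use check from scripts/common.sh), each nvmeXnY namespace with id-ns, and bookkeeping arrays seen in the trace (ctrls, nvmes, bdfs, ordered_ctrls) map controllers to their namespaces and PCI addresses. Roughly, as a reconstruction — reading the BDF from the sysfs 'address' attribute is an assumption, the trace derives it elsewhere:

    declare -A ctrls=() bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                      # the "@48" existence check in the trace
        ctrl_dev=${ctrl##*/}                            # nvme2, nvme3, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # fills e.g. nvme3[vid], nvme3[ctratt]
        for ns in "$ctrl/${ctrl_dev}n"*; do             # nvme2n1, nvme2n2, nvme2n3
            [[ -e $ns ]] || continue
            nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        bdfs["$ctrl_dev"]=$(<"$ctrl/address")           # PCI BDF, e.g. 0000:00:12.0
    done

The nvme_fdp test then keys off these parsed fields; in the nvme3 id-ctrl dump below, ctratt=0x88010 has bit 19 (0x80000) set, the NVMe "Flexible Data Placement supported" controller attribute.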
00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:37.954 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:37.955 
15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:37.955 15:15:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:38.217 15:15:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:38.217 15:15:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:38.217 15:15:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:38.217 15:15:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:38.217 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:38.217 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 
15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 
15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.218 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:38.219 15:15:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]]
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]]
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 ))
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:10:38.219 15:15:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:10:38.219 15:15:20 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:10:38.219 15:15:20 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:38.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:39.726 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:39.726 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:39.726 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:39.726 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
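The controller selection traced above reduces to two steps: fold each controller's identify data into an associative array, then test bit 19 (FDPS) of its CTRATT value. A minimal sketch of that flow, with illustrative names rather than the exact nvme/functions.sh helpers, and assuming nvme-cli is installed:

    # Minimal sketch (not the verbatim helpers): fold `nvme id-ctrl`
    # "name : value" lines into an associative array, then test CTRATT
    # bit 19 (FDPS). Whitespace stripping is simplified for scalar fields.
    declare -A ctrl_regs
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}
        [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=${val//[[:space:]]/}
    done < <(nvme id-ctrl /dev/nvme3)
    # In this run nvme3 reported ctratt=0x88010 (bit 19 set); the other
    # controllers reported 0x8000 (bit 19 clear) and were skipped.
    if (( ${ctrl_regs[ctratt]} & 1 << 19 )); then
        echo "nvme3 supports FDP"
    fi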
00:10:39.726 15:15:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:39.726 15:15:22 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:10:39.726 15:15:22 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:39.726 15:15:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:39.726 ************************************
00:10:39.726 START TEST nvme_flexible_data_placement
00:10:39.726 ************************************
00:10:39.726 15:15:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:39.985 Initializing NVMe Controllers
00:10:39.985 Attaching to 0000:00:13.0
00:10:39.985 Controller supports FDP Attached to 0000:00:13.0
00:10:39.985 Namespace ID: 1 Endurance Group ID: 1
00:10:39.985 Initialization complete.
00:10:39.985
00:10:39.985 ==================================
00:10:39.985 == FDP tests for Namespace: #01 ==
00:10:39.985 ==================================
00:10:39.985
00:10:39.985 Get Feature: FDP:
00:10:39.985 =================
00:10:39.985 Enabled: Yes
00:10:39.985 FDP configuration Index: 0
00:10:39.985
00:10:39.985 FDP configurations log page
00:10:39.985 ===========================
00:10:39.985 Number of FDP configurations: 1
00:10:39.985 Version: 0
00:10:39.985 Size: 112
00:10:39.985 FDP Configuration Descriptor: 0
00:10:39.985 Descriptor Size: 96
00:10:39.985 Reclaim Group Identifier format: 2
00:10:39.985 FDP Volatile Write Cache: Not Present
00:10:39.985 FDP Configuration: Valid
00:10:39.985 Vendor Specific Size: 0
00:10:39.985 Number of Reclaim Groups: 2
00:10:39.985 Number of Reclaim Unit Handles: 8
00:10:39.985 Max Placement Identifiers: 128
00:10:39.985 Number of Namespaces Supported: 256
00:10:39.985 Reclaim Unit Nominal Size: 6000000 bytes
00:10:39.985 Estimated Reclaim Unit Time Limit: Not Reported
00:10:39.985 RUH Desc #000: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #001: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #002: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #003: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #004: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #005: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #006: RUH Type: Initially Isolated
00:10:39.985 RUH Desc #007: RUH Type: Initially Isolated
00:10:39.985
00:10:39.985 FDP reclaim unit handle usage log page
00:10:39.985 ======================================
00:10:39.985 Number of Reclaim Unit Handles: 8
00:10:39.985 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:39.985 RUH Usage Desc #001: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #002: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #003: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #004: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #005: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #006: RUH Attributes: Unused
00:10:39.985 RUH Usage Desc #007: RUH Attributes: Unused
00:10:39.985
00:10:39.985 FDP statistics log page
00:10:39.985 =======================
00:10:39.985 Host bytes with metadata written: 987926528
00:10:39.985 Media bytes with metadata written: 990724096
00:10:39.985 Media bytes erased: 0
00:10:39.985
00:10:39.985 FDP Reclaim unit handle status
00:10:39.985 ==============================
00:10:39.985 Number of RUHS descriptors: 2
00:10:39.985 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000011d7
00:10:39.985 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:39.985
00:10:39.985 FDP write on placement id: 0 success
00:10:39.985
00:10:39.985 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:39.985
00:10:39.985 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:39.985
00:10:39.985 Get Feature: FDP Events for Placement handle: #0
00:10:39.985 ========================
00:10:39.985 Number of FDP Events: 6
00:10:39.985 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:39.985 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:39.985 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:39.985 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:39.985 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:39.985 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:39.985
00:10:39.985 FDP events log page
00:10:39.985 ===================
00:10:39.985 Number of FDP events: 1
00:10:39.985 FDP Event #0:
00:10:39.985 Event Type: RU Not Written to Capacity
00:10:39.985 Placement Identifier: Valid
00:10:39.985 NSID: Valid
00:10:39.985 Location: Valid
00:10:39.985 Placement Identifier: 0
00:10:39.985 Event Timestamp: 8
00:10:39.985 Namespace Identifier: 1
00:10:39.986 Reclaim Group Identifier: 0
00:10:39.986 Reclaim Unit Handle Identifier: 0
00:10:39.986
00:10:39.986 FDP test passed
00:10:40.244 ************************************
00:10:40.244 END TEST nvme_flexible_data_placement
00:10:40.244 ************************************
00:10:40.244
00:10:40.244 real 0m0.311s
00:10:40.244 user 0m0.098s
00:10:40.244 sys 0m0.111s
00:10:40.244 15:15:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:40.244 15:15:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:40.244 ************************************
00:10:40.244 END TEST nvme_fdp
00:10:40.244 ************************************
00:10:40.244
00:10:40.244 real 0m9.011s
00:10:40.244 user 0m1.568s
00:10:40.244 sys 0m2.516s
00:10:40.244 15:15:22 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:40.244 15:15:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:40.244 15:15:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:40.244 15:15:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:40.244 15:15:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:10:40.244 15:15:22 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:40.244 15:15:22 -- common/autotest_common.sh@10 -- # set +x
00:10:40.244 ************************************
00:10:40.244 START TEST nvme_rpc
00:10:40.244 ************************************
00:10:40.244 15:15:22 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:40.504 * Looking for test storage...
00:10:40.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:40.504 15:15:22 nvme_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:40.504 15:15:22 nvme_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:10:40.504 15:15:22 nvme_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.504 15:15:23 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:40.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.504 --rc genhtml_branch_coverage=1 00:10:40.504 --rc genhtml_function_coverage=1 00:10:40.504 --rc genhtml_legend=1 00:10:40.504 --rc geninfo_all_blocks=1 00:10:40.504 --rc geninfo_unexecuted_blocks=1 00:10:40.504 00:10:40.504 ' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:40.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.504 --rc genhtml_branch_coverage=1 00:10:40.504 --rc genhtml_function_coverage=1 00:10:40.504 --rc genhtml_legend=1 00:10:40.504 --rc geninfo_all_blocks=1 00:10:40.504 --rc geninfo_unexecuted_blocks=1 00:10:40.504 00:10:40.504 ' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 
00:10:40.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.504 --rc genhtml_branch_coverage=1 00:10:40.504 --rc genhtml_function_coverage=1 00:10:40.504 --rc genhtml_legend=1 00:10:40.504 --rc geninfo_all_blocks=1 00:10:40.504 --rc geninfo_unexecuted_blocks=1 00:10:40.504 00:10:40.504 ' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:40.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.504 --rc genhtml_branch_coverage=1 00:10:40.504 --rc genhtml_function_coverage=1 00:10:40.504 --rc genhtml_legend=1 00:10:40.504 --rc geninfo_all_blocks=1 00:10:40.504 --rc geninfo_unexecuted_blocks=1 00:10:40.504 00:10:40.504 ' 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1505 -- # bdfs=() 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1505 -- # local bdfs 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1494 -- # bdfs=() 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1494 -- # local bdfs 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1495 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1496 -- # (( 4 == 0 )) 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@1508 -- # echo 0000:00:10.0 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67054 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:40.504 15:15:23 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67054 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67054 ']' 00:10:40.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.504 15:15:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.763 [2024-10-25 15:15:23.328294] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
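For reference, the get_first_nvme_bdf lookup traced just above is essentially one pipeline: gen_nvme.sh emits an SPDK bdev config as JSON, and jq pulls each controller's PCI address. A sketch of that flow under the paths used in this run, assuming jq is installed (not the verbatim helper):

    # Sketch of BDF discovery: take the first traddr from gen_nvme.sh output.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # this run found 4 controllers
    echo "${bdfs[0]}"                 # -> 0000:00:10.0 in this run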
00:10:40.763 [2024-10-25 15:15:23.328424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67054 ] 00:10:41.022 [2024-10-25 15:15:23.514477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:41.022 [2024-10-25 15:15:23.638026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.022 [2024-10-25 15:15:23.638057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.959 15:15:24 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.959 15:15:24 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:41.959 15:15:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:42.217 Nvme0n1 00:10:42.217 15:15:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:42.217 15:15:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:42.476 request: 00:10:42.476 { 00:10:42.476 "bdev_name": "Nvme0n1", 00:10:42.476 "filename": "non_existing_file", 00:10:42.476 "method": "bdev_nvme_apply_firmware", 00:10:42.476 "req_id": 1 00:10:42.476 } 00:10:42.476 Got JSON-RPC error response 00:10:42.476 response: 00:10:42.476 { 00:10:42.476 "code": -32603, 00:10:42.476 "message": "open file failed." 00:10:42.476 } 00:10:42.476 15:15:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:42.476 15:15:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:42.476 15:15:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:42.736 15:15:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:42.736 15:15:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67054 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67054 ']' 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67054 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67054 00:10:42.736 killing process with pid 67054 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67054' 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67054 00:10:42.736 15:15:25 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67054 00:10:45.269 ************************************ 00:10:45.269 END TEST nvme_rpc 00:10:45.269 ************************************ 00:10:45.269 00:10:45.269 real 0m4.771s 00:10:45.269 user 0m8.740s 00:10:45.269 sys 0m0.850s 00:10:45.269 15:15:27 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.269 15:15:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.269 15:15:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:45.269 15:15:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:45.269 15:15:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.269 15:15:27 -- common/autotest_common.sh@10 -- # set +x 00:10:45.269 ************************************ 00:10:45.269 START TEST nvme_rpc_timeouts 00:10:45.269 ************************************ 00:10:45.269 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:45.269 * Looking for test storage... 00:10:45.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:45.269 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:45.269 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lcov --version 00:10:45.269 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:45.269 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.269 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.270 15:15:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:45.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.270 --rc genhtml_branch_coverage=1 00:10:45.270 --rc genhtml_function_coverage=1 00:10:45.270 --rc genhtml_legend=1 00:10:45.270 --rc geninfo_all_blocks=1 00:10:45.270 --rc geninfo_unexecuted_blocks=1 00:10:45.270 00:10:45.270 ' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:45.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.270 --rc genhtml_branch_coverage=1 00:10:45.270 --rc genhtml_function_coverage=1 00:10:45.270 --rc genhtml_legend=1 00:10:45.270 --rc geninfo_all_blocks=1 00:10:45.270 --rc geninfo_unexecuted_blocks=1 00:10:45.270 00:10:45.270 ' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:45.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.270 --rc genhtml_branch_coverage=1 00:10:45.270 --rc genhtml_function_coverage=1 00:10:45.270 --rc genhtml_legend=1 00:10:45.270 --rc geninfo_all_blocks=1 00:10:45.270 --rc geninfo_unexecuted_blocks=1 00:10:45.270 00:10:45.270 ' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:45.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.270 --rc genhtml_branch_coverage=1 00:10:45.270 --rc genhtml_function_coverage=1 00:10:45.270 --rc genhtml_legend=1 00:10:45.270 --rc geninfo_all_blocks=1 00:10:45.270 --rc geninfo_unexecuted_blocks=1 00:10:45.270 00:10:45.270 ' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67131 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67131 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67163 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:45.270 15:15:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67163 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67163 ']' 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.270 15:15:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:45.529 [2024-10-25 15:15:28.046750] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:10:45.529 [2024-10-25 15:15:28.047038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67163 ] 00:10:45.529 [2024-10-25 15:15:28.231409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.787 [2024-10-25 15:15:28.356203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.787 [2024-10-25 15:15:28.356274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.724 Checking default timeout settings: 00:10:46.724 15:15:29 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:46.724 15:15:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:46.724 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:46.724 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:46.982 Making settings changes with rpc: 00:10:46.982 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:46.982 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:47.241 Check default vs. modified settings: 00:10:47.241 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:47.241 15:15:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 Setting action_on_timeout is changed as expected. 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 Setting timeout_us is changed as expected. 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:47.500 Setting timeout_admin_us is changed as expected. 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67131 /tmp/settings_modified_67131 00:10:47.500 15:15:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67163 00:10:47.500 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67163 ']' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67163 00:10:47.500 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:47.500 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:47.500 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67163 00:10:47.759 killing process with pid 67163 00:10:47.759 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:47.759 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:47.759 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67163' 00:10:47.759 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67163 00:10:47.759 15:15:30 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67163 00:10:50.291 RPC TIMEOUT SETTING TEST PASSED. 00:10:50.291 15:15:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
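Condensed, the verification this suite just traced: snapshot the default config, apply new timeouts over JSON-RPC, snapshot again, and require every checked field to differ. A hedged sketch assembled from the commands above, reusing the PID-suffixed tmp-file names from this run:

    # Sketch of the default-vs-modified check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" save_config > /tmp/settings_default_67131
    "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc" save_config > /tmp/settings_modified_67131
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67131 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67131 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done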
00:10:50.291 
00:10:50.291 real 0m4.982s 
00:10:50.291 user 0m9.356s 
00:10:50.291 sys 0m0.815s 
00:10:50.291 ************************************ 
00:10:50.291 END TEST nvme_rpc_timeouts 
00:10:50.291 ************************************ 
00:10:50.291 15:15:32 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:10:50.291 15:15:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 
00:10:50.291 15:15:32 -- spdk/autotest.sh@239 -- # uname -s 
00:10:50.291 15:15:32 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 
00:10:50.291 15:15:32 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 
00:10:50.291 15:15:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:10:50.291 15:15:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:10:50.291 15:15:32 -- common/autotest_common.sh@10 -- # set +x 
00:10:50.291 ************************************ 
00:10:50.291 START TEST sw_hotplug 
00:10:50.291 ************************************ 
00:10:50.291 15:15:32 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 
00:10:50.291 * Looking for test storage... 
00:10:50.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 
00:10:50.291 15:15:32 sw_hotplug -- common/autotest_common.sh@1688 -- # [[ y == y ]] 
00:10:50.291 15:15:32 sw_hotplug -- common/autotest_common.sh@1689 -- # lcov --version 
00:10:50.291 15:15:32 sw_hotplug -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 
00:10:50.291 15:15:32 sw_hotplug -- common/autotest_common.sh@1689 -- # lt 1.15 2 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 
00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.291 15:15:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.291 15:15:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:50.291 15:15:33 sw_hotplug -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.291 15:15:33 sw_hotplug -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.291 --rc genhtml_branch_coverage=1 00:10:50.291 --rc genhtml_function_coverage=1 00:10:50.291 --rc genhtml_legend=1 00:10:50.291 --rc geninfo_all_blocks=1 00:10:50.291 --rc geninfo_unexecuted_blocks=1 00:10:50.291 00:10:50.291 ' 00:10:50.291 15:15:33 sw_hotplug -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.291 --rc genhtml_branch_coverage=1 00:10:50.291 --rc genhtml_function_coverage=1 00:10:50.291 --rc genhtml_legend=1 00:10:50.291 --rc geninfo_all_blocks=1 00:10:50.291 --rc geninfo_unexecuted_blocks=1 00:10:50.291 00:10:50.291 ' 00:10:50.291 15:15:33 sw_hotplug -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.291 --rc genhtml_branch_coverage=1 00:10:50.291 --rc genhtml_function_coverage=1 00:10:50.291 --rc genhtml_legend=1 00:10:50.291 --rc geninfo_all_blocks=1 00:10:50.291 --rc geninfo_unexecuted_blocks=1 00:10:50.291 00:10:50.291 ' 00:10:50.291 15:15:33 sw_hotplug -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:50.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.291 --rc genhtml_branch_coverage=1 00:10:50.291 --rc genhtml_function_coverage=1 00:10:50.291 --rc genhtml_legend=1 00:10:50.291 --rc geninfo_all_blocks=1 00:10:50.291 --rc geninfo_unexecuted_blocks=1 00:10:50.291 00:10:50.291 ' 00:10:50.291 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.146 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.146 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.146 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.146 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.405 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:51.405 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:51.405 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
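The nvme_in_userspace expansion that follows reduces to a single pipeline: ask lspci for machine-readable numeric output and keep only functions whose class/subclass/progif triple is 01/08/02, i.e. NVM Express. Extracted from the trace into standalone form (the per-device pci_can_use / PCI_ALLOWED filtering is left out):

    # Print the PCI addresses (BDFs) of every NVMe controller in the system,
    # mirroring iter_all_pci_class_code 01 08 02 from scripts/common.sh below.
    lspci -mm -n -D | grep -i -- -p02 |
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

On this VM that prints 0000:00:10.0 through 0000:00:13.0; the test then keeps only the first nvme_count=2 of them.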
00:10:51.405 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:51.405 15:15:33 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.406 15:15:33 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:51.406 15:15:33 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:51.406 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:51.406 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:51.406 15:15:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:51.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.231 Waiting for block devices as requested 00:10:52.231 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.490 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.490 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:52.490 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:57.765 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:57.765 15:15:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:57.765 15:15:40 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:58.330 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:58.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:58.330 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:58.897 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:59.154 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.154 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:59.154 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:59.154 15:15:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68059 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:59.413 15:15:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:59.413 15:15:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:59.413 15:15:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:59.413 15:15:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:59.413 15:15:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:59.413 15:15:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:59.672 Initializing NVMe Controllers 00:10:59.672 Attaching to 0000:00:10.0 00:10:59.672 Attaching to 0000:00:11.0 00:10:59.672 Attached to 0000:00:11.0 00:10:59.672 Attached to 0000:00:10.0 00:10:59.672 Initialization complete. Starting I/O... 
00:10:59.672 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:59.672 QEMU NVMe Ctrl (12340 ): 3 I/Os completed (+3) 00:10:59.672 00:11:00.609 QEMU NVMe Ctrl (12341 ): 1496 I/Os completed (+1496) 00:11:00.609 QEMU NVMe Ctrl (12340 ): 1499 I/Os completed (+1496) 00:11:00.609 00:11:01.545 QEMU NVMe Ctrl (12341 ): 3604 I/Os completed (+2108) 00:11:01.545 QEMU NVMe Ctrl (12340 ): 3610 I/Os completed (+2111) 00:11:01.545 00:11:02.919 QEMU NVMe Ctrl (12341 ): 5784 I/Os completed (+2180) 00:11:02.919 QEMU NVMe Ctrl (12340 ): 5790 I/Os completed (+2180) 00:11:02.919 00:11:03.852 QEMU NVMe Ctrl (12341 ): 7940 I/Os completed (+2156) 00:11:03.852 QEMU NVMe Ctrl (12340 ): 7946 I/Os completed (+2156) 00:11:03.852 00:11:04.787 QEMU NVMe Ctrl (12341 ): 10072 I/Os completed (+2132) 00:11:04.787 QEMU NVMe Ctrl (12340 ): 10078 I/Os completed (+2132) 00:11:04.787 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.421 [2024-10-25 15:15:48.006420] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:05.421 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:05.421 [2024-10-25 15:15:48.008624] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.008713] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.008810] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.008863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:05.421 [2024-10-25 15:15:48.011692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.011841] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.011893] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.011983] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:11:05.421 EAL: Scan for (pci) bus failed. 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.421 [2024-10-25 15:15:48.045988] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:05.421 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:05.421 [2024-10-25 15:15:48.047813] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.047971] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.048031] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.048146] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:05.421 [2024-10-25 15:15:48.050877] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.051010] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.051040] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 [2024-10-25 15:15:48.051058] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:05.421 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:05.421 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:05.421 EAL: Scan for (pci) bus failed. 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:05.680 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:05.680 Attaching to 0000:00:10.0 00:11:05.680 Attached to 0000:00:10.0 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:05.680 15:15:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:05.680 Attaching to 0000:00:11.0 00:11:05.939 Attached to 0000:00:11.0 00:11:06.506 QEMU NVMe Ctrl (12340 ): 1936 I/Os completed (+1936) 00:11:06.506 QEMU NVMe Ctrl (12341 ): 1701 I/Os completed (+1701) 00:11:06.506 00:11:07.882 QEMU NVMe Ctrl (12340 ): 3932 I/Os completed (+1996) 00:11:07.882 QEMU NVMe Ctrl (12341 ): 3698 I/Os completed (+1997) 00:11:07.882 00:11:08.820 QEMU NVMe Ctrl (12340 ): 5720 I/Os completed (+1788) 00:11:08.820 QEMU NVMe Ctrl (12341 ): 5493 I/Os completed (+1795) 00:11:08.820 00:11:09.758 QEMU NVMe Ctrl (12340 ): 7612 I/Os completed (+1892) 00:11:09.758 QEMU NVMe Ctrl (12341 ): 7392 I/Os completed (+1899) 00:11:09.758 00:11:10.706 QEMU NVMe Ctrl (12340 ): 9648 I/Os completed (+2036) 00:11:10.706 QEMU NVMe Ctrl (12341 ): 9430 I/Os completed (+2038) 00:11:10.706 00:11:11.641 QEMU NVMe Ctrl (12340 ): 11684 I/Os completed (+2036) 00:11:11.641 QEMU NVMe Ctrl (12341 ): 11466 I/Os completed (+2036) 00:11:11.641 00:11:12.577 QEMU NVMe Ctrl (12340 ): 13728 I/Os completed (+2044) 00:11:12.577 QEMU NVMe Ctrl (12341 ): 13510 I/Os completed (+2044) 
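Each hotplug event in this phase has the same shape: both allowed controllers are surprise-removed (the echo 1 at sw_hotplug.sh@40, which triggers the failed-state and abort messages above), the test sleeps, and the devices are then rebound and re-attached (@56 through @62). A rough sketch of one cycle, assuming the echoes target the usual sysfs nodes, which the trace itself does not show:

    # One surprise-removal / rescan cycle over the selected controllers.
    nvmes=(0000:00:10.0 0000:00:11.0)
    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # assumed target of the echo at @40
    done
    sleep "$hotplug_wait"                             # 6 s here, per sw_hotplug.sh@131
    echo 1 > /sys/bus/pci/rescan                      # re-enumerate so the app re-attaches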
00:11:12.577 00:11:13.513 QEMU NVMe Ctrl (12340 ): 15708 I/Os completed (+1980) 00:11:13.513 QEMU NVMe Ctrl (12341 ): 15490 I/Os completed (+1980) 00:11:13.513 00:11:14.886 QEMU NVMe Ctrl (12340 ): 17332 I/Os completed (+1624) 00:11:14.886 QEMU NVMe Ctrl (12341 ): 17119 I/Os completed (+1629) 00:11:14.886 00:11:15.821 QEMU NVMe Ctrl (12340 ): 19032 I/Os completed (+1700) 00:11:15.821 QEMU NVMe Ctrl (12341 ): 18821 I/Os completed (+1702) 00:11:15.821 00:11:16.762 QEMU NVMe Ctrl (12340 ): 20728 I/Os completed (+1696) 00:11:16.762 QEMU NVMe Ctrl (12341 ): 20523 I/Os completed (+1702) 00:11:16.762 00:11:17.699 QEMU NVMe Ctrl (12340 ): 22588 I/Os completed (+1860) 00:11:17.699 QEMU NVMe Ctrl (12341 ): 22390 I/Os completed (+1867) 00:11:17.699 00:11:17.699 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:17.699 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.699 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.699 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.699 [2024-10-25 15:16:00.411284] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.699 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:17.699 [2024-10-25 15:16:00.413244] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.413410] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.413466] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.413677] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:17.699 [2024-10-25 15:16:00.417133] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.417295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.417353] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.699 [2024-10-25 15:16:00.417445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.959 [2024-10-25 15:16:00.450359] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:17.959 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:17.959 [2024-10-25 15:16:00.452070] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.452119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.452148] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.452169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:17.959 [2024-10-25 15:16:00.454996] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.455045] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.455068] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 [2024-10-25 15:16:00.455089] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.959 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:18.223 Attaching to 0000:00:10.0 00:11:18.223 Attached to 0000:00:10.0 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.223 15:16:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:18.223 Attaching to 0000:00:11.0 00:11:18.223 Attached to 0000:00:11.0 00:11:18.486 QEMU NVMe Ctrl (12340 ): 924 I/Os completed (+924) 00:11:18.486 QEMU NVMe Ctrl (12341 ): 684 I/Os completed (+684) 00:11:18.486 00:11:19.863 QEMU NVMe Ctrl (12340 ): 2756 I/Os completed (+1832) 00:11:19.863 QEMU NVMe Ctrl (12341 ): 2520 I/Os completed (+1836) 00:11:19.863 00:11:20.822 QEMU NVMe Ctrl (12340 ): 4631 I/Os completed (+1875) 00:11:20.822 QEMU NVMe Ctrl (12341 ): 4399 I/Os completed (+1879) 00:11:20.822 00:11:21.758 QEMU NVMe Ctrl (12340 ): 6525 I/Os completed (+1894) 00:11:21.758 QEMU NVMe Ctrl (12341 ): 6300 I/Os completed (+1901) 00:11:21.758 00:11:22.696 QEMU NVMe Ctrl (12340 ): 8457 I/Os completed (+1932) 00:11:22.696 QEMU NVMe Ctrl (12341 ): 8235 I/Os completed (+1935) 00:11:22.696 00:11:23.632 QEMU NVMe Ctrl (12340 ): 10417 I/Os completed (+1960) 00:11:23.632 QEMU NVMe Ctrl (12341 ): 10199 I/Os completed (+1964) 00:11:23.632 00:11:24.569 QEMU NVMe Ctrl (12340 ): 12201 I/Os completed (+1784) 00:11:24.569 QEMU NVMe Ctrl (12341 ): 12007 I/Os completed (+1808) 00:11:24.569 00:11:25.504 QEMU NVMe Ctrl (12340 ): 14124 I/Os completed (+1923) 00:11:25.504 QEMU NVMe Ctrl (12341 ): 13952 I/Os completed (+1945) 00:11:25.504 00:11:26.879 QEMU 
NVMe Ctrl (12340 ): 15812 I/Os completed (+1688) 00:11:26.879 QEMU NVMe Ctrl (12341 ): 15658 I/Os completed (+1706) 00:11:26.879 00:11:27.817 QEMU NVMe Ctrl (12340 ): 17548 I/Os completed (+1736) 00:11:27.817 QEMU NVMe Ctrl (12341 ): 17394 I/Os completed (+1736) 00:11:27.817 00:11:28.752 QEMU NVMe Ctrl (12340 ): 19308 I/Os completed (+1760) 00:11:28.752 QEMU NVMe Ctrl (12341 ): 19161 I/Os completed (+1767) 00:11:28.752 00:11:29.688 QEMU NVMe Ctrl (12340 ): 21160 I/Os completed (+1852) 00:11:29.688 QEMU NVMe Ctrl (12341 ): 21013 I/Os completed (+1852) 00:11:29.688 00:11:30.255 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:30.255 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.255 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.255 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.255 [2024-10-25 15:16:12.824271] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:30.255 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:30.255 [2024-10-25 15:16:12.828427] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.255 [2024-10-25 15:16:12.828488] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.255 [2024-10-25 15:16:12.828511] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.255 [2024-10-25 15:16:12.828538] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.255 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:30.255 [2024-10-25 15:16:12.831719] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.255 [2024-10-25 15:16:12.831768] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.831787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.831808] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.256 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.256 [2024-10-25 15:16:12.867162] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:30.256 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:30.256 [2024-10-25 15:16:12.869152] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.869224] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.869254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.869279] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:30.256 [2024-10-25 15:16:12.872254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.872296] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.872325] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 [2024-10-25 15:16:12.872346] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.256 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:30.256 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.256 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:30.256 EAL: Scan for (pci) bus failed. 00:11:30.515 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.515 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.515 15:16:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:30.515 Attaching to 0000:00:10.0 00:11:30.515 Attached to 0000:00:10.0 00:11:30.515 QEMU NVMe Ctrl (12340 ): 144 I/Os completed (+144) 00:11:30.515 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.515 15:16:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.515 Attaching to 0000:00:11.0 00:11:30.515 Attached to 0000:00:11.0 00:11:30.515 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:30.515 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:30.515 [2024-10-25 15:16:13.220034] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:42.745 15:16:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:42.745 15:16:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.745 15:16:25 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.21 00:11:42.745 15:16:25 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.21 00:11:42.745 15:16:25 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:42.745 15:16:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.21 00:11:42.745 15:16:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.21 2 00:11:42.745 remove_attach_helper took 43.21s 
to complete (handling 2 nvme drive(s)) 15:16:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:49.370 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68059 00:11:49.370 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68059) - No such process 00:11:49.370 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68059 00:11:49.370 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68600 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:49.371 15:16:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68600 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 68600 ']' 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:49.371 15:16:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.371 [2024-10-25 15:16:31.335872] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
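The 43.21 figure above is produced by timing_cmd (common/autotest_common.sh@707-@718 in the trace): TIMEFORMAT=%2R makes bash's time keyword print only the real time with two decimals, the helper runs, and the captured value feeds the printf summary. A simplified reconstruction; the real helper also juggles file descriptors with the exec at @709, which this sketch skips:

    # Time a command and print only its elapsed wall-clock seconds.
    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R
        # `time` reports on stderr; the inner 2>&1 folds it into the capture
        # (so the command's own stderr ends up captured too in this sketch).
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 false)  # events, wait, use_bdev
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"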
00:11:49.371 [2024-10-25 15:16:31.336004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68600 ] 00:11:49.371 [2024-10-25 15:16:31.516888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.371 [2024-10-25 15:16:31.642631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:49.988 15:16:32 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:49.988 15:16:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.564 [2024-10-25 15:16:38.653580] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
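From here the test switches from the standalone hotplug example to a long-lived spdk_tgt and drives the same removals while the target is serving bdevs. The only extra moving part is the hotplug monitor, toggled over JSON-RPC; rpc_cmd in the trace is, in effect, a wrapper around scripts/rpc.py:

    # Toggle bdev_nvme's hotplug monitor in a running target.
    ./scripts/rpc.py bdev_nvme_set_hotplug -e   # enable  (sw_hotplug.sh@115, and again at @120)
    ./scripts/rpc.py bdev_nvme_set_hotplug -d   # disable (sw_hotplug.sh@119, during teardown)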
00:11:56.564 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.564 [2024-10-25 15:16:38.656257] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.564 [2024-10-25 15:16:38.656293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.564 [2024-10-25 15:16:38.656311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:38.656339] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:38.656352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:38.656369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:38.656383] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:38.656398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:38.656410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:38.656429] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:38.656441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:38.656455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.565 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.565 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.565 15:16:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.565 15:16:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.565 15:16:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.565 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:56.565 15:16:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:56.565 [2024-10-25 15:16:39.052933] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:56.565 [2024-10-25 15:16:39.055657] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:39.055703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:39.055726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:39.055750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:39.055776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:39.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:39.055805] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:39.055817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:39.055831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 [2024-10-25 15:16:39.055844] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.565 [2024-10-25 15:16:39.055858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.565 [2024-10-25 15:16:39.055870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.565 15:16:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.565 15:16:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.565 15:16:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.565 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.824 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:57.083 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:57.083 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:57.083 15:16:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.298 [2024-10-25 15:16:51.732587] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
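In bdev mode the 'gone' check is no longer sysfs-based: after each removal the test polls the target until no NVMe bdev still reports a PCI address, using the bdev_bdfs helper expanded at sw_hotplug.sh@12-@13 above. Standalone, the helper and its wait loop look roughly like this (rpc_cmd replaced by a direct rpc.py call):

    # PCI addresses currently backing NVMe bdevs in the target.
    bdev_bdfs() {
        ./scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }

    # Poll until the removed controllers drop out of the bdev list.
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done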
00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.298 [2024-10-25 15:16:51.735342] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.298 [2024-10-25 15:16:51.735385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.298 [2024-10-25 15:16:51.735403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.298 [2024-10-25 15:16:51.735431] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.298 [2024-10-25 15:16:51.735444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.298 [2024-10-25 15:16:51.735459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.298 [2024-10-25 15:16:51.735473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.298 [2024-10-25 15:16:51.735488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.298 [2024-10-25 15:16:51.735501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.298 [2024-10-25 15:16:51.735517] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.298 [2024-10-25 15:16:51.735528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.298 [2024-10-25 15:16:51.735543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.298 15:16:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:09.298 15:16:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:09.558 [2024-10-25 15:16:52.131908] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:09.558 [2024-10-25 15:16:52.134671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.558 [2024-10-25 15:16:52.134726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.558 [2024-10-25 15:16:52.134750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.558 [2024-10-25 15:16:52.134774] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.558 [2024-10-25 15:16:52.134790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.558 [2024-10-25 15:16:52.134803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.558 [2024-10-25 15:16:52.134819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.558 [2024-10-25 15:16:52.134832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.558 [2024-10-25 15:16:52.134847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.558 [2024-10-25 15:16:52.134860] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.558 [2024-10-25 15:16:52.134874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.558 [2024-10-25 15:16:52.134886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.816 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.817 15:16:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.817 15:16:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.817 15:16:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.817 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:10.075 15:16:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.290 [2024-10-25 15:17:04.811550] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:22.290 [2024-10-25 15:17:04.814159] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-25 15:17:04.814218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-25 15:17:04.814237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-25 15:17:04.814266] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-25 15:17:04.814279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-25 15:17:04.814302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-25 15:17:04.814317] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-25 15:17:04.814333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-25 15:17:04.814346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 [2024-10-25 15:17:04.814365] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.290 [2024-10-25 15:17:04.814377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.290 [2024-10-25 15:17:04.814392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.290 15:17:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:22.290 15:17:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.548 [2024-10-25 15:17:05.210970] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:22.548 [2024-10-25 15:17:05.213693] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.548 [2024-10-25 15:17:05.213734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.548 [2024-10-25 15:17:05.213757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-10-25 15:17:05.213781] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.548 [2024-10-25 15:17:05.213799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.548 [2024-10-25 15:17:05.213811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-10-25 15:17:05.213830] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.548 [2024-10-25 15:17:05.213842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.548 [2024-10-25 15:17:05.213863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.548 [2024-10-25 15:17:05.213876] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.548 [2024-10-25 15:17:05.213893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.548 [2024-10-25 15:17:05.213905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.806 15:17:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.806 15:17:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.806 15:17:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.806 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:23.065 15:17:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.23 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.23 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:12:35.266 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:35.266 15:17:17 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:35.266 15:17:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:35.266 15:17:17 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.839 15:17:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.839 15:17:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.839 [2024-10-25 15:17:23.921052] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:41.839 [2024-10-25 15:17:23.923550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.839 [2024-10-25 15:17:23.923605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.839 [2024-10-25 15:17:23.923625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.839 [2024-10-25 15:17:23.923655] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.839 [2024-10-25 15:17:23.923669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.839 [2024-10-25 15:17:23.923685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.839 [2024-10-25 15:17:23.923700] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.839 [2024-10-25 15:17:23.923716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.839 [2024-10-25 15:17:23.923729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.839 [2024-10-25 15:17:23.923756] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.839 [2024-10-25 15:17:23.923768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.839 [2024-10-25 15:17:23.923786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.839 15:17:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:41.839 15:17:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.839 [2024-10-25 15:17:24.320431] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
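The 45.23 s figure reported just before this pass comes from timing_cmd (autotest_common.sh@707-@720 in the trace): the helper runs under bash's built-in time with TIMEFORMAT=%2R, and the elapsed seconds are echoed so the caller can capture them into helper_time. A simplified sketch; the file-descriptor routing here is an assumption, and the timed command's own stderr would be swallowed in this form:

    timing_cmd() {
        local time TIMEFORMAT=%2R    # `time` then reports only real seconds
        exec 3>&1                    # keep the timed command's stdout visible
        time=$( { time "$@" 1>&3; } 2>&1 )
        exec 3>&-
        echo "$time"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2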
00:12:41.839 [2024-10-25 15:17:24.323042] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.839 [2024-10-25 15:17:24.323091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.839 [2024-10-25 15:17:24.323112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.839 [2024-10-25 15:17:24.323135] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.840 [2024-10-25 15:17:24.323151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.840 [2024-10-25 15:17:24.323164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.840 [2024-10-25 15:17:24.323194] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.840 [2024-10-25 15:17:24.323207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.840 [2024-10-25 15:17:24.323222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.840 [2024-10-25 15:17:24.323237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.840 [2024-10-25 15:17:24.323251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.840 [2024-10-25 15:17:24.323264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.840 15:17:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.840 15:17:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.840 15:17:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.840 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:42.099 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:42.358 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:42.358 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:42.358 15:17:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.566 15:17:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 15:17:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.566 15:17:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:54.566 15:17:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.566 [2024-10-25 15:17:37.000037] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
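xtrace does not show redirection targets, so the bare echo 1 / echo uio_pci_generic / echo <bdf> / echo '' lines above reveal only what is written, not where. The standard sysfs hotplug sequence consistent with this trace is sketched below; the exact target files are an assumption, not something the log confirms:

    bdf=0000:00:10.0

    # Surprise-remove the device (the bare `echo 1` at sw_hotplug.sh@40).
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"

    # Bring it back and steer it to uio_pci_generic (@56-@62).
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"    # clear the override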
00:12:54.566 [2024-10-25 15:17:37.002730] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.566 [2024-10-25 15:17:37.002780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.566 [2024-10-25 15:17:37.002799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.566 [2024-10-25 15:17:37.002826] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.566 [2024-10-25 15:17:37.002840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.566 [2024-10-25 15:17:37.002860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.566 [2024-10-25 15:17:37.002874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.566 [2024-10-25 15:17:37.002889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.566 [2024-10-25 15:17:37.002902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.566 [2024-10-25 15:17:37.002918] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.566 [2024-10-25 15:17:37.002931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.566 [2024-10-25 15:17:37.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.566 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.566 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.566 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.566 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.566 15:17:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.566 15:17:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.566 15:17:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.567 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:54.567 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:54.825 [2024-10-25 15:17:37.399415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:54.825 [2024-10-25 15:17:37.401370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.825 [2024-10-25 15:17:37.401412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.825 [2024-10-25 15:17:37.401433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.825 [2024-10-25 15:17:37.401455] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.825 [2024-10-25 15:17:37.401477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.825 [2024-10-25 15:17:37.401489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.825 [2024-10-25 15:17:37.401505] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.825 [2024-10-25 15:17:37.401517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.825 [2024-10-25 15:17:37.401549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.825 [2024-10-25 15:17:37.401564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.825 [2024-10-25 15:17:37.401578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.825 [2024-10-25 15:17:37.401591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.825 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:54.825 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.084 15:17:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.084 15:17:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.084 15:17:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:55.084 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:55.395 15:17:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:07.604 15:17:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.604 15:17:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.604 15:17:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:07.604 15:17:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:07.604 15:17:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.604 15:17:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.604 [2024-10-25 15:17:50.079470] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
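Each cycle ends the same way: sleep long enough for both controllers to be reprobed, then assert that bdev_bdfs reports exactly the expected pair; the heavily escaped pattern at @71 is just a literal string comparison. In outline:

    sleep 12                        # @66: apparently hotplug_wait (6 s) per device
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]    # @71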
00:13:07.604 [2024-10-25 15:17:50.082145] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.604 [2024-10-25 15:17:50.082203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.604 [2024-10-25 15:17:50.082222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.604 [2024-10-25 15:17:50.082249] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.604 [2024-10-25 15:17:50.082262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.604 [2024-10-25 15:17:50.082277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.604 [2024-10-25 15:17:50.082291] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.604 [2024-10-25 15:17:50.082313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.604 [2024-10-25 15:17:50.082325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.604 [2024-10-25 15:17:50.082341] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.604 [2024-10-25 15:17:50.082353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.604 [2024-10-25 15:17:50.082368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.604 15:17:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:07.604 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:07.863 [2024-10-25 15:17:50.478847] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:07.863 [2024-10-25 15:17:50.480873] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.863 [2024-10-25 15:17:50.480919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.863 [2024-10-25 15:17:50.480941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.863 [2024-10-25 15:17:50.480965] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.863 [2024-10-25 15:17:50.480980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.863 [2024-10-25 15:17:50.480994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.863 [2024-10-25 15:17:50.481011] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.863 [2024-10-25 15:17:50.481023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.863 [2024-10-25 15:17:50.481040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.863 [2024-10-25 15:17:50.481054] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.863 [2024-10-25 15:17:50.481072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.863 [2024-10-25 15:17:50.481095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.121 15:17:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.121 15:17:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.121 15:17:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.121 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:08.378 15:17:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:20.646 15:18:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.646 15:18:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.646 15:18:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.646 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:20.646 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.20 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.20 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:20.646 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.20 00:13:20.646 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.20 2 00:13:20.646 remove_attach_helper took 45.20s to complete (handling 2 nvme drive(s)) 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:20.646 15:18:03 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68600 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 68600 ']' 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 68600 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68600 00:13:20.646 killing process with pid 68600 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68600' 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@969 -- # kill 68600 00:13:20.646 15:18:03 sw_hotplug -- common/autotest_common.sh@974 -- # wait 68600 00:13:23.175 15:18:05 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:23.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:24.309 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.309 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:24.309 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.309 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:24.309 00:13:24.309 real 2m34.264s 00:13:24.309 user 1m51.559s 00:13:24.309 sys 0m23.075s 00:13:24.309 15:18:07 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.309 15:18:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:24.309 ************************************ 00:13:24.309 END TEST sw_hotplug 00:13:24.309 ************************************ 00:13:24.568 15:18:07 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:24.568 15:18:07 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:24.568 15:18:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.568 15:18:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.568 15:18:07 -- common/autotest_common.sh@10 -- # set +x 00:13:24.568 ************************************ 00:13:24.568 START TEST nvme_xnvme 00:13:24.568 ************************************ 00:13:24.568 15:18:07 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:24.568 * Looking for test storage... 00:13:24.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:24.568 15:18:07 nvme_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:24.568 15:18:07 nvme_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:13:24.568 15:18:07 nvme_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.828 --rc genhtml_branch_coverage=1 00:13:24.828 --rc genhtml_function_coverage=1 00:13:24.828 --rc genhtml_legend=1 00:13:24.828 --rc geninfo_all_blocks=1 00:13:24.828 --rc geninfo_unexecuted_blocks=1 00:13:24.828 00:13:24.828 ' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.828 --rc genhtml_branch_coverage=1 00:13:24.828 --rc genhtml_function_coverage=1 00:13:24.828 --rc genhtml_legend=1 00:13:24.828 --rc geninfo_all_blocks=1 00:13:24.828 --rc geninfo_unexecuted_blocks=1 00:13:24.828 00:13:24.828 ' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.828 --rc genhtml_branch_coverage=1 00:13:24.828 --rc genhtml_function_coverage=1 00:13:24.828 --rc genhtml_legend=1 00:13:24.828 --rc geninfo_all_blocks=1 00:13:24.828 --rc geninfo_unexecuted_blocks=1 00:13:24.828 00:13:24.828 ' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.828 --rc genhtml_branch_coverage=1 00:13:24.828 --rc genhtml_function_coverage=1 00:13:24.828 --rc genhtml_legend=1 00:13:24.828 --rc geninfo_all_blocks=1 00:13:24.828 --rc geninfo_unexecuted_blocks=1 00:13:24.828 00:13:24.828 ' 00:13:24.828 15:18:07 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.828 15:18:07 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.828 15:18:07 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.828 15:18:07 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.828 15:18:07 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.828 15:18:07 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:24.828 15:18:07 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.828 15:18:07 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.828 15:18:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.828 ************************************ 00:13:24.828 START TEST xnvme_to_malloc_dd_copy 00:13:24.828 ************************************ 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:24.828 15:18:07 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:24.828 15:18:07 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:24.828 { 00:13:24.828 "subsystems": [ 00:13:24.828 { 00:13:24.828 "subsystem": "bdev", 00:13:24.828 "config": [ 00:13:24.828 { 00:13:24.828 "params": { 00:13:24.828 "block_size": 512, 00:13:24.828 "num_blocks": 2097152, 00:13:24.828 "name": "malloc0" 00:13:24.829 }, 00:13:24.829 "method": "bdev_malloc_create" 00:13:24.829 }, 00:13:24.829 { 00:13:24.829 "params": { 00:13:24.829 "io_mechanism": "libaio", 00:13:24.829 "filename": "/dev/nullb0", 00:13:24.829 "name": "null0" 00:13:24.829 }, 00:13:24.829 "method": "bdev_xnvme_create" 00:13:24.829 }, 00:13:24.829 { 00:13:24.829 "method": "bdev_wait_for_examine" 00:13:24.829 } 00:13:24.829 ] 00:13:24.829 } 00:13:24.829 ] 00:13:24.829 } 00:13:24.829 [2024-10-25 15:18:07.470344] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
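The copy itself is all spdk_dd: gen_conf prints the JSON shown above on a spare file descriptor and spdk_dd consumes it via --json, copying the 1 GiB malloc0 bdev into the xnvme-backed null0. An equivalent standalone invocation, sketched with a quoted string instead of the harness's /dev/fd plumbing; SPDK_DIR stands in for the repo checkout, and /dev/nullb0 is assumed to exist already (modprobe null_blk gb=1, traced just before):

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
            { "method": "bdev_xnvme_create",
              "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    "$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(printf '%s' "$conf")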
00:13:24.829 [2024-10-25 15:18:07.470468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69964 ] 00:13:25.087 [2024-10-25 15:18:07.654538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.087 [2024-10-25 15:18:07.788149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.619  [2024-10-25T15:18:11.284Z] Copying: 243/1024 [MB] (243 MBps) [2024-10-25T15:18:12.660Z] Copying: 489/1024 [MB] (246 MBps) [2024-10-25T15:18:13.598Z] Copying: 734/1024 [MB] (244 MBps) [2024-10-25T15:18:13.598Z] Copying: 980/1024 [MB] (246 MBps) [2024-10-25T15:18:17.818Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:13:35.090 00:13:35.090 15:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:35.090 15:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:35.090 15:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:35.090 15:18:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:35.090 { 00:13:35.090 "subsystems": [ 00:13:35.090 { 00:13:35.090 "subsystem": "bdev", 00:13:35.090 "config": [ 00:13:35.090 { 00:13:35.090 "params": { 00:13:35.090 "block_size": 512, 00:13:35.090 "num_blocks": 2097152, 00:13:35.090 "name": "malloc0" 00:13:35.090 }, 00:13:35.090 "method": "bdev_malloc_create" 00:13:35.090 }, 00:13:35.090 { 00:13:35.090 "params": { 00:13:35.090 "io_mechanism": "libaio", 00:13:35.090 "filename": "/dev/nullb0", 00:13:35.090 "name": "null0" 00:13:35.090 }, 00:13:35.090 "method": "bdev_xnvme_create" 00:13:35.090 }, 00:13:35.090 { 00:13:35.090 "method": "bdev_wait_for_examine" 00:13:35.090 } 00:13:35.090 ] 00:13:35.090 } 00:13:35.090 ] 00:13:35.091 } 00:13:35.091 [2024-10-25 15:18:17.740934] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
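The second pass (xnvme.sh@47 above) only swaps the direction flags, so the two runs exercise the write and read paths separately under the same config. Reusing the conf string from the previous sketch:

    # Forward: RAM -> xnvme bdev, ~245 MB/s in the libaio pass above.
    "$SPDK_DIR/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(printf '%s' "$conf")
    # Reverse: xnvme bdev -> RAM, ~210 MB/s in the libaio pass that follows.
    "$SPDK_DIR/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json <(printf '%s' "$conf")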
00:13:35.091 [2024-10-25 15:18:17.741060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70084 ] 00:13:35.349 [2024-10-25 15:18:17.925463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.349 [2024-10-25 15:18:18.047213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.922  [2024-10-25T15:18:22.025Z] Copying: 220/1024 [MB] (220 MBps) [2024-10-25T15:18:22.590Z] Copying: 424/1024 [MB] (204 MBps) [2024-10-25T15:18:23.967Z] Copying: 630/1024 [MB] (205 MBps) [2024-10-25T15:18:24.535Z] Copying: 838/1024 [MB] (208 MBps) [2024-10-25T15:18:28.714Z] Copying: 1024/1024 [MB] (average 210 MBps) 00:13:45.986 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:45.986 15:18:28 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:45.986 { 00:13:45.986 "subsystems": [ 00:13:45.986 { 00:13:45.986 "subsystem": "bdev", 00:13:45.986 "config": [ 00:13:45.986 { 00:13:45.986 "params": { 00:13:45.986 "block_size": 512, 00:13:45.986 "num_blocks": 2097152, 00:13:45.986 "name": "malloc0" 00:13:45.986 }, 00:13:45.986 "method": "bdev_malloc_create" 00:13:45.986 }, 00:13:45.986 { 00:13:45.986 "params": { 00:13:45.986 "io_mechanism": "io_uring", 00:13:45.986 "filename": "/dev/nullb0", 00:13:45.986 "name": "null0" 00:13:45.986 }, 00:13:45.986 "method": "bdev_xnvme_create" 00:13:45.986 }, 00:13:45.986 { 00:13:45.986 "method": "bdev_wait_for_examine" 00:13:45.986 } 00:13:45.986 ] 00:13:45.986 } 00:13:45.986 ] 00:13:45.986 } 00:13:45.986 [2024-10-25 15:18:28.662947] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
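Between the libaio and io_uring passes exactly one field changes, as the @38-@39 lines above show: the io_mechanism key in the xnvme bdev's parameter set. The surrounding loop reduces to:

    declare -A method_bdev_xnvme_create_0=(
        [name]=null0
        [filename]=/dev/nullb0
    )
    for io in libaio io_uring; do
        method_bdev_xnvme_create_0[io_mechanism]=$io
        # gen_conf renders this array as the bdev_xnvme_create entry of the
        # JSON config, then both copy directions are rerun; the io_uring
        # passes that follow reach roughly 267-271 MB/s versus libaio's
        # 245/210 MB/s.
    done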
00:13:45.986 [2024-10-25 15:18:28.663488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ] 00:13:46.244 [2024-10-25 15:18:28.827885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.502 [2024-10-25 15:18:28.988110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.031  [2024-10-25T15:18:32.695Z] Copying: 266/1024 [MB] (266 MBps) [2024-10-25T15:18:33.630Z] Copying: 532/1024 [MB] (265 MBps) [2024-10-25T15:18:34.567Z] Copying: 799/1024 [MB] (266 MBps) [2024-10-25T15:18:38.757Z] Copying: 1024/1024 [MB] (average 267 MBps) 00:13:56.029 00:13:56.029 15:18:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:56.029 15:18:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:56.029 15:18:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:56.029 15:18:38 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:56.029 { 00:13:56.029 "subsystems": [ 00:13:56.029 { 00:13:56.029 "subsystem": "bdev", 00:13:56.029 "config": [ 00:13:56.029 { 00:13:56.029 "params": { 00:13:56.029 "block_size": 512, 00:13:56.029 "num_blocks": 2097152, 00:13:56.029 "name": "malloc0" 00:13:56.029 }, 00:13:56.029 "method": "bdev_malloc_create" 00:13:56.029 }, 00:13:56.029 { 00:13:56.029 "params": { 00:13:56.029 "io_mechanism": "io_uring", 00:13:56.029 "filename": "/dev/nullb0", 00:13:56.029 "name": "null0" 00:13:56.029 }, 00:13:56.029 "method": "bdev_xnvme_create" 00:13:56.029 }, 00:13:56.029 { 00:13:56.029 "method": "bdev_wait_for_examine" 00:13:56.029 } 00:13:56.029 ] 00:13:56.029 } 00:13:56.029 ] 00:13:56.029 } 00:13:56.029 [2024-10-25 15:18:38.470902] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:13:56.029 [2024-10-25 15:18:38.471042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70320 ] 00:13:56.029 [2024-10-25 15:18:38.654314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.287 [2024-10-25 15:18:38.771695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.818  [2024-10-25T15:18:42.483Z] Copying: 272/1024 [MB] (272 MBps) [2024-10-25T15:18:43.419Z] Copying: 543/1024 [MB] (271 MBps) [2024-10-25T15:18:44.354Z] Copying: 814/1024 [MB] (271 MBps) [2024-10-25T15:18:48.542Z] Copying: 1024/1024 [MB] (average 271 MBps) 00:14:05.814 00:14:05.815 15:18:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:05.815 15:18:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:05.815 ************************************ 00:14:05.815 END TEST xnvme_to_malloc_dd_copy 00:14:05.815 ************************************ 00:14:05.815 00:14:05.815 real 0m40.652s 00:14:05.815 user 0m35.606s 00:14:05.815 sys 0m4.495s 00:14:05.815 15:18:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:05.815 15:18:47 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:05.815 15:18:48 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:05.815 15:18:48 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:05.815 15:18:48 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:05.815 15:18:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.815 ************************************ 00:14:05.815 START TEST xnvme_bdevperf 00:14:05.815 ************************************ 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:05.815 
15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.815 15:18:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.815 { 00:14:05.815 "subsystems": [ 00:14:05.815 { 00:14:05.815 "subsystem": "bdev", 00:14:05.815 "config": [ 00:14:05.815 { 00:14:05.815 "params": { 00:14:05.815 "io_mechanism": "libaio", 00:14:05.815 "filename": "/dev/nullb0", 00:14:05.815 "name": "null0" 00:14:05.815 }, 00:14:05.815 "method": "bdev_xnvme_create" 00:14:05.815 }, 00:14:05.815 { 00:14:05.815 "method": "bdev_wait_for_examine" 00:14:05.815 } 00:14:05.815 ] 00:14:05.815 } 00:14:05.815 ] 00:14:05.815 } 00:14:05.815 [2024-10-25 15:18:48.195584] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:14:05.815 [2024-10-25 15:18:48.196441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70451 ] 00:14:05.815 [2024-10-25 15:18:48.381035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.815 [2024-10-25 15:18:48.505401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.381 Running I/O for 5 seconds... 00:14:08.305 146048.00 IOPS, 570.50 MiB/s [2024-10-25T15:18:51.971Z] 144096.00 IOPS, 562.88 MiB/s [2024-10-25T15:18:52.905Z] 142634.67 IOPS, 557.17 MiB/s [2024-10-25T15:18:54.281Z] 142336.00 IOPS, 556.00 MiB/s 00:14:11.553 Latency(us) 00:14:11.553 [2024-10-25T15:18:54.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.553 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:11.553 null0 : 5.00 142284.50 555.80 0.00 0.00 447.27 120.08 2013.46 00:14:11.553 [2024-10-25T15:18:54.281Z] =================================================================================================================== 00:14:11.553 [2024-10-25T15:18:54.281Z] Total : 142284.50 555.80 0.00 0.00 447.27 120.08 2013.46 00:14:12.487 15:18:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:12.488 15:18:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:12.488 15:18:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:12.488 15:18:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:12.488 15:18:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:12.488 15:18:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.488 { 00:14:12.488 "subsystems": [ 00:14:12.488 { 00:14:12.488 "subsystem": "bdev", 00:14:12.488 "config": [ 00:14:12.488 { 00:14:12.488 "params": { 00:14:12.488 "io_mechanism": "io_uring", 00:14:12.488 "filename": "/dev/nullb0", 00:14:12.488 "name": "null0" 00:14:12.488 }, 00:14:12.488 "method": "bdev_xnvme_create" 00:14:12.488 }, 00:14:12.488 { 00:14:12.488 "method": 
"bdev_wait_for_examine" 00:14:12.488 } 00:14:12.488 ] 00:14:12.488 } 00:14:12.488 ] 00:14:12.488 } 00:14:12.488 [2024-10-25 15:18:55.162621] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:14:12.488 [2024-10-25 15:18:55.162780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70531 ] 00:14:12.748 [2024-10-25 15:18:55.348715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.748 [2024-10-25 15:18:55.473553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.315 Running I/O for 5 seconds... 00:14:15.200 186560.00 IOPS, 728.75 MiB/s [2024-10-25T15:18:58.864Z] 186624.00 IOPS, 729.00 MiB/s [2024-10-25T15:19:00.241Z] 185642.67 IOPS, 725.17 MiB/s [2024-10-25T15:19:00.838Z] 185824.00 IOPS, 725.88 MiB/s 00:14:18.110 Latency(us) 00:14:18.110 [2024-10-25T15:19:00.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.110 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:18.110 null0 : 5.00 185948.11 726.36 0.00 0.00 341.66 202.33 1842.38 00:14:18.110 [2024-10-25T15:19:00.838Z] =================================================================================================================== 00:14:18.110 [2024-10-25T15:19:00.838Z] Total : 185948.11 726.36 0.00 0.00 341.66 202.33 1842.38 00:14:19.487 15:19:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:19.487 15:19:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:19.487 00:14:19.487 real 0m13.998s 00:14:19.487 user 0m10.390s 00:14:19.487 sys 0m3.385s 00:14:19.487 ************************************ 00:14:19.487 END TEST xnvme_bdevperf 00:14:19.487 ************************************ 00:14:19.487 15:19:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.487 15:19:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.487 ************************************ 00:14:19.487 END TEST nvme_xnvme 00:14:19.487 ************************************ 00:14:19.487 00:14:19.487 real 0m55.054s 00:14:19.487 user 0m46.185s 00:14:19.487 sys 0m8.098s 00:14:19.487 15:19:02 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.487 15:19:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.745 15:19:02 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:19.745 15:19:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.745 15:19:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.745 15:19:02 -- common/autotest_common.sh@10 -- # set +x 00:14:19.745 ************************************ 00:14:19.745 START TEST blockdev_xnvme 00:14:19.745 ************************************ 00:14:19.745 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:19.745 * Looking for test storage... 
00:14:19.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:19.745 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:19.745 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lcov --version 00:14:19.745 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:19.745 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:19.745 15:19:02 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.003 15:19:02 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:20.003 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.003 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:20.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.003 --rc genhtml_branch_coverage=1 00:14:20.003 --rc genhtml_function_coverage=1 00:14:20.003 --rc genhtml_legend=1 00:14:20.003 --rc geninfo_all_blocks=1 00:14:20.003 --rc geninfo_unexecuted_blocks=1 00:14:20.004 00:14:20.004 ' 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.004 --rc genhtml_branch_coverage=1 00:14:20.004 --rc genhtml_function_coverage=1 00:14:20.004 --rc genhtml_legend=1 
00:14:20.004 --rc geninfo_all_blocks=1 00:14:20.004 --rc geninfo_unexecuted_blocks=1 00:14:20.004 00:14:20.004 ' 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.004 --rc genhtml_branch_coverage=1 00:14:20.004 --rc genhtml_function_coverage=1 00:14:20.004 --rc genhtml_legend=1 00:14:20.004 --rc geninfo_all_blocks=1 00:14:20.004 --rc geninfo_unexecuted_blocks=1 00:14:20.004 00:14:20.004 ' 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:20.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.004 --rc genhtml_branch_coverage=1 00:14:20.004 --rc genhtml_function_coverage=1 00:14:20.004 --rc genhtml_legend=1 00:14:20.004 --rc geninfo_all_blocks=1 00:14:20.004 --rc geninfo_unexecuted_blocks=1 00:14:20.004 00:14:20.004 ' 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70684 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70684 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 70684 ']' 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.004 15:19:02 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:20.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.004 15:19:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:20.004 [2024-10-25 15:19:02.620657] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:14:20.004 [2024-10-25 15:19:02.620991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70684 ] 00:14:20.262 [2024-10-25 15:19:02.804411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.262 [2024-10-25 15:19:02.927603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.198 15:19:03 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.198 15:19:03 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:21.198 15:19:03 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:21.198 15:19:03 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:21.198 15:19:03 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:21.198 15:19:03 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:21.198 15:19:03 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:21.768 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:22.028 Waiting for block devices as requested 00:14:22.287 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.287 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.547 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:22.547 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:27.820 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:27.820 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1654 -- # local nvme bdf 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned 
nvme1n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme1n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n1 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:27.820 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n2 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n2 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme2n3 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme2n3 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3c3n1 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3c3n1 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1657 -- # is_block_zoned nvme3n1 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1646 -- # local device=nvme3n1 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 nvme0n1 00:14:27.821 nvme1n1 00:14:27.821 nvme2n1 00:14:27.821 nvme2n2 00:14:27.821 nvme2n3 00:14:27.821 nvme3n1 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "700cde60-7334-401d-9753-e50924d0be70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "700cde60-7334-401d-9753-e50924d0be70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c96a78ac-31f2-477d-85ae-51eaa35a18df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c96a78ac-31f2-477d-85ae-51eaa35a18df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7db494c8-0e8a-46ea-90d2-1db204a440e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7db494c8-0e8a-46ea-90d2-1db204a440e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e81fe934-3c10-4d48-a921-0b65248c6750"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e81fe934-3c10-4d48-a921-0b65248c6750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4744718d-b2db-49c6-98c6-a620c8504a96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4744718d-b2db-49c6-98c6-a620c8504a96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "80bf9b5d-0c21-4547-8724-f9b7b692d328"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80bf9b5d-0c21-4547-8724-f9b7b692d328",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:27.821 15:19:10 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70684 00:14:27.821 15:19:10 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 70684 ']' 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 70684 00:14:27.821 15:19:10 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70684 00:14:28.081 killing process with pid 70684 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70684' 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 70684 00:14:28.081 15:19:10 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 70684 00:14:30.651 15:19:13 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:30.652 15:19:13 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:30.652 15:19:13 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:30.652 15:19:13 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.652 15:19:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.652 ************************************ 00:14:30.652 START TEST bdev_hello_world 00:14:30.652 ************************************ 00:14:30.652 15:19:13 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:30.652 [2024-10-25 15:19:13.211500] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:14:30.652 [2024-10-25 15:19:13.211869] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71067 ] 00:14:30.911 [2024-10-25 15:19:13.394955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.911 [2024-10-25 15:19:13.517186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.479 [2024-10-25 15:19:13.969622] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:31.479 [2024-10-25 15:19:13.969678] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:31.479 [2024-10-25 15:19:13.969699] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:31.479 [2024-10-25 15:19:13.971983] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:31.479 [2024-10-25 15:19:13.972356] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:31.479 [2024-10-25 15:19:13.972377] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:31.479 [2024-10-25 15:19:13.972684] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
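
The hello_bdev example completing above follows a fixed open, write, read-back cycle against the first xnvme bdev. A minimal sketch of how bdev_hello_world launches it, assuming the bdev.json written earlier by setup_xnvme_conf:

# Sketch of the bdev_hello_world launch; paths assume the test VM layout.
# The harness also passes a trailing '' for its empty env_ctx argument.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/build/examples/hello_bdev" \
    --json "$SPDK/test/bdev/bdev.json" \
    -b nvme0n1

# On success the app logs the cycle seen in the NOTICE lines above:
#   hello_start    -> opens bdev nvme0n1 and an io channel
#   hello_write    -> writes "Hello World!" to the first block
#   hello_read     -> reads it back, prints it, and stops the app (rc 0)
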
00:14:31.479 00:14:31.479 [2024-10-25 15:19:13.972723] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:32.879 00:14:32.879 real 0m2.048s 00:14:32.879 user 0m1.667s 00:14:32.879 sys 0m0.264s 00:14:32.879 ************************************ 00:14:32.879 END TEST bdev_hello_world 00:14:32.879 ************************************ 00:14:32.879 15:19:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:32.879 15:19:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:32.879 15:19:15 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:32.879 15:19:15 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:32.879 15:19:15 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:32.879 15:19:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:32.879 ************************************ 00:14:32.879 START TEST bdev_bounds 00:14:32.879 ************************************ 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:32.879 Process bdevio pid: 71109 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71109 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71109' 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71109 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71109 ']' 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.879 15:19:15 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:32.879 [2024-10-25 15:19:15.338477] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:14:32.879 [2024-10-25 15:19:15.338840] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71109 ] 00:14:32.879 [2024-10-25 15:19:15.523286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:33.138 [2024-10-25 15:19:15.651948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.138 [2024-10-25 15:19:15.652111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.138 [2024-10-25 15:19:15.652140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.705 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.705 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:33.705 15:19:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:33.705 I/O targets: 00:14:33.705 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:33.705 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:33.705 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:33.705 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:33.705 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:33.705 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:33.705 00:14:33.705 00:14:33.705 CUnit - A unit testing framework for C - Version 2.1-3 00:14:33.705 http://cunit.sourceforge.net/ 00:14:33.705 00:14:33.705 00:14:33.705 Suite: bdevio tests on: nvme3n1 00:14:33.705 Test: blockdev write read block ...passed 00:14:33.705 Test: blockdev write zeroes read block ...passed 00:14:33.705 Test: blockdev write zeroes read no split ...passed 00:14:33.705 Test: blockdev write zeroes read split ...passed 00:14:33.705 Test: blockdev write zeroes read split partial ...passed 00:14:33.705 Test: blockdev reset ...passed 00:14:33.705 Test: blockdev write read 8 blocks ...passed 00:14:33.705 Test: blockdev write read size > 128k ...passed 00:14:33.705 Test: blockdev write read invalid size ...passed 00:14:33.705 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:33.705 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:33.705 Test: blockdev write read max offset ...passed 00:14:33.705 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:33.705 Test: blockdev writev readv 8 blocks ...passed 00:14:33.705 Test: blockdev writev readv 30 x 1block ...passed 00:14:33.705 Test: blockdev writev readv block ...passed 00:14:33.705 Test: blockdev writev readv size > 128k ...passed 00:14:33.705 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:33.705 Test: blockdev comparev and writev ...passed 00:14:33.705 Test: blockdev nvme passthru rw ...passed 00:14:33.705 Test: blockdev nvme passthru vendor specific ...passed 00:14:33.705 Test: blockdev nvme admin passthru ...passed 00:14:33.705 Test: blockdev copy ...passed 00:14:33.705 Suite: bdevio tests on: nvme2n3 00:14:33.705 Test: blockdev write read block ...passed 00:14:33.705 Test: blockdev write zeroes read block ...passed 00:14:33.705 Test: blockdev write zeroes read no split ...passed 00:14:33.963 Test: blockdev write zeroes read split ...passed 00:14:33.963 Test: blockdev write zeroes read split partial ...passed 00:14:33.963 Test: blockdev reset ...passed 
00:14:33.963 Test: blockdev write read 8 blocks ...passed 00:14:33.963 Test: blockdev write read size > 128k ...passed 00:14:33.963 Test: blockdev write read invalid size ...passed 00:14:33.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:33.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:33.963 Test: blockdev write read max offset ...passed 00:14:33.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:33.963 Test: blockdev writev readv 8 blocks ...passed 00:14:33.963 Test: blockdev writev readv 30 x 1block ...passed 00:14:33.963 Test: blockdev writev readv block ...passed 00:14:33.963 Test: blockdev writev readv size > 128k ...passed 00:14:33.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:33.963 Test: blockdev comparev and writev ...passed 00:14:33.963 Test: blockdev nvme passthru rw ...passed 00:14:33.963 Test: blockdev nvme passthru vendor specific ...passed 00:14:33.963 Test: blockdev nvme admin passthru ...passed 00:14:33.963 Test: blockdev copy ...passed 00:14:33.963 Suite: bdevio tests on: nvme2n2 00:14:33.963 Test: blockdev write read block ...passed 00:14:33.963 Test: blockdev write zeroes read block ...passed 00:14:33.963 Test: blockdev write zeroes read no split ...passed 00:14:33.963 Test: blockdev write zeroes read split ...passed 00:14:33.963 Test: blockdev write zeroes read split partial ...passed 00:14:33.963 Test: blockdev reset ...passed 00:14:33.963 Test: blockdev write read 8 blocks ...passed 00:14:33.963 Test: blockdev write read size > 128k ...passed 00:14:33.963 Test: blockdev write read invalid size ...passed 00:14:33.963 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:33.963 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:33.963 Test: blockdev write read max offset ...passed 00:14:33.963 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:33.963 Test: blockdev writev readv 8 blocks ...passed 00:14:33.963 Test: blockdev writev readv 30 x 1block ...passed 00:14:33.963 Test: blockdev writev readv block ...passed 00:14:33.963 Test: blockdev writev readv size > 128k ...passed 00:14:33.963 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:33.963 Test: blockdev comparev and writev ...passed 00:14:33.963 Test: blockdev nvme passthru rw ...passed 00:14:33.963 Test: blockdev nvme passthru vendor specific ...passed 00:14:33.964 Test: blockdev nvme admin passthru ...passed 00:14:33.964 Test: blockdev copy ...passed 00:14:33.964 Suite: bdevio tests on: nvme2n1 00:14:33.964 Test: blockdev write read block ...passed 00:14:33.964 Test: blockdev write zeroes read block ...passed 00:14:33.964 Test: blockdev write zeroes read no split ...passed 00:14:33.964 Test: blockdev write zeroes read split ...passed 00:14:33.964 Test: blockdev write zeroes read split partial ...passed 00:14:33.964 Test: blockdev reset ...passed 00:14:33.964 Test: blockdev write read 8 blocks ...passed 00:14:33.964 Test: blockdev write read size > 128k ...passed 00:14:33.964 Test: blockdev write read invalid size ...passed 00:14:33.964 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:33.964 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:33.964 Test: blockdev write read max offset ...passed 00:14:33.964 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:33.964 Test: blockdev writev readv 8 blocks 
...passed 00:14:33.964 Test: blockdev writev readv 30 x 1block ...passed 00:14:33.964 Test: blockdev writev readv block ...passed 00:14:33.964 Test: blockdev writev readv size > 128k ...passed 00:14:33.964 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:33.964 Test: blockdev comparev and writev ...passed 00:14:33.964 Test: blockdev nvme passthru rw ...passed 00:14:33.964 Test: blockdev nvme passthru vendor specific ...passed 00:14:33.964 Test: blockdev nvme admin passthru ...passed 00:14:33.964 Test: blockdev copy ...passed 00:14:33.964 Suite: bdevio tests on: nvme1n1 00:14:33.964 Test: blockdev write read block ...passed 00:14:33.964 Test: blockdev write zeroes read block ...passed 00:14:33.964 Test: blockdev write zeroes read no split ...passed 00:14:34.223 Test: blockdev write zeroes read split ...passed 00:14:34.223 Test: blockdev write zeroes read split partial ...passed 00:14:34.223 Test: blockdev reset ...passed 00:14:34.223 Test: blockdev write read 8 blocks ...passed 00:14:34.223 Test: blockdev write read size > 128k ...passed 00:14:34.223 Test: blockdev write read invalid size ...passed 00:14:34.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:34.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:34.223 Test: blockdev write read max offset ...passed 00:14:34.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:34.223 Test: blockdev writev readv 8 blocks ...passed 00:14:34.223 Test: blockdev writev readv 30 x 1block ...passed 00:14:34.223 Test: blockdev writev readv block ...passed 00:14:34.223 Test: blockdev writev readv size > 128k ...passed 00:14:34.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:34.223 Test: blockdev comparev and writev ...passed 00:14:34.223 Test: blockdev nvme passthru rw ...passed 00:14:34.223 Test: blockdev nvme passthru vendor specific ...passed 00:14:34.223 Test: blockdev nvme admin passthru ...passed 00:14:34.223 Test: blockdev copy ...passed 00:14:34.223 Suite: bdevio tests on: nvme0n1 00:14:34.223 Test: blockdev write read block ...passed 00:14:34.223 Test: blockdev write zeroes read block ...passed 00:14:34.223 Test: blockdev write zeroes read no split ...passed 00:14:34.223 Test: blockdev write zeroes read split ...passed 00:14:34.223 Test: blockdev write zeroes read split partial ...passed 00:14:34.223 Test: blockdev reset ...passed 00:14:34.223 Test: blockdev write read 8 blocks ...passed 00:14:34.223 Test: blockdev write read size > 128k ...passed 00:14:34.223 Test: blockdev write read invalid size ...passed 00:14:34.223 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:34.223 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:34.223 Test: blockdev write read max offset ...passed 00:14:34.223 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:34.223 Test: blockdev writev readv 8 blocks ...passed 00:14:34.223 Test: blockdev writev readv 30 x 1block ...passed 00:14:34.223 Test: blockdev writev readv block ...passed 00:14:34.223 Test: blockdev writev readv size > 128k ...passed 00:14:34.223 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:34.223 Test: blockdev comparev and writev ...passed 00:14:34.223 Test: blockdev nvme passthru rw ...passed 00:14:34.223 Test: blockdev nvme passthru vendor specific ...passed 00:14:34.223 Test: blockdev nvme admin passthru ...passed 00:14:34.223 Test: blockdev copy ...passed 
00:14:34.223 00:14:34.223 Run Summary: Type Total Ran Passed Failed Inactive 00:14:34.223 suites 6 6 n/a 0 0 00:14:34.223 tests 138 138 138 0 0 00:14:34.223 asserts 780 780 780 0 n/a 00:14:34.223 00:14:34.223 Elapsed time = 1.403 seconds 00:14:34.223 0 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71109 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71109 ']' 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71109 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71109 00:14:34.223 killing process with pid 71109 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71109' 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71109 00:14:34.223 15:19:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71109 00:14:35.598 15:19:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:35.598 00:14:35.598 real 0m2.867s 00:14:35.598 user 0m7.117s 00:14:35.598 sys 0m0.425s 00:14:35.598 15:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.598 15:19:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:35.598 ************************************ 00:14:35.598 END TEST bdev_bounds 00:14:35.598 ************************************ 00:14:35.598 15:19:18 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:35.598 15:19:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:35.598 15:19:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.598 15:19:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.598 ************************************ 00:14:35.598 START TEST bdev_nbd 00:14:35.598 ************************************ 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
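
The 6 suites / 138 cases summarized above are driven by bdev_bounds in two steps: bdevio is started in wait mode and the suites are then kicked off over its RPC socket. A sketch of that pattern, assuming the same repo layout; backgrounding with & stands in for the harness's process management:

# Sketch of the bdevio two-step used by bdev_bounds: start the app waiting
# for RPC (-w) with no reserved memory (-s 0), then trigger the suites.
SPDK=/home/vagrant/spdk_repo/spdk

"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 \
    --json "$SPDK/test/bdev/bdev.json" &
bdevio_pid=$!

# The harness waits for the RPC socket first (waitforlisten); then
# perform_tests runs every registered suite against every bdev.
"$SPDK/test/bdev/bdevio/tests.py" perform_tests

kill "$bdevio_pid"; wait "$bdevio_pid"
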
00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71172 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71172 /var/tmp/spdk-nbd.sock 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71172 ']' 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:35.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.598 15:19:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:35.598 [2024-10-25 15:19:18.292468] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
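
In the nbd flow that follows below, a bdev_svc app owns the six xnvme bdevs and each one is exported to a kernel /dev/nbdX over RPC. A sketch of that export pattern, assuming the socket path and bdev names from this run:

# Sketch of the nbd export pattern exercised below. The harness waits for
# the RPC socket (waitforlisten) before issuing the first RPC.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 \
    --json "$SPDK/test/bdev/bdev.json" &

# nbd_start_disk prints the nbd device it attached (here /dev/nbd0);
# the harness captures it and then health-checks it with waitfornbd.
nbd0=$("$SPDK/scripts/rpc.py" -s "$SOCK" nbd_start_disk nvme0n1)
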
00:14:35.598 [2024-10-25 15:19:18.292816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.857 [2024-10-25 15:19:18.479948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.115 [2024-10-25 15:19:18.610911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.681 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.939 
1+0 records in 00:14:36.939 1+0 records out 00:14:36.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574288 s, 7.1 MB/s 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.939 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.199 1+0 records in 00:14:37.199 1+0 records out 00:14:37.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078834 s, 5.2 MB/s 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.199 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:37.459 15:19:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.459 15:19:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.459 1+0 records in 00:14:37.459 1+0 records out 00:14:37.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631622 s, 6.5 MB/s 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.459 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.717 1+0 records in 00:14:37.717 1+0 records out 00:14:37.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672119 s, 6.1 MB/s 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.717 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.985 1+0 records in 00:14:37.985 1+0 records out 00:14:37.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914139 s, 4.5 MB/s 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:37.985 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:38.243 15:19:20 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.243 1+0 records in 00:14:38.243 1+0 records out 00:14:38.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736916 s, 5.6 MB/s 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:38.243 15:19:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd0", 00:14:38.502 "bdev_name": "nvme0n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd1", 00:14:38.502 "bdev_name": "nvme1n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd2", 00:14:38.502 "bdev_name": "nvme2n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd3", 00:14:38.502 "bdev_name": "nvme2n2" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd4", 00:14:38.502 "bdev_name": "nvme2n3" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd5", 00:14:38.502 "bdev_name": "nvme3n1" 00:14:38.502 } 00:14:38.502 ]' 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd0", 00:14:38.502 "bdev_name": "nvme0n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd1", 00:14:38.502 "bdev_name": "nvme1n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd2", 00:14:38.502 "bdev_name": "nvme2n1" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd3", 00:14:38.502 "bdev_name": "nvme2n2" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd4", 00:14:38.502 "bdev_name": "nvme2n3" 00:14:38.502 }, 00:14:38.502 { 00:14:38.502 "nbd_device": "/dev/nbd5", 00:14:38.502 "bdev_name": "nvme3n1" 00:14:38.502 } 00:14:38.502 ]' 00:14:38.502 15:19:21 
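Once all six devices are attached, nbd_get_disks (shown above returning the six-element JSON array) is the source of truth for the nbd_device-to-bdev_name mapping, and the helper reduces it to a plain device list with jq. The same pipeline, lifted directly from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device'          # one path per line: /dev/nbd0 ... /dev/nbd5
  # piping the same output through 'grep -c /dev/nbd' yields the device count the test asserts on
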
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.502 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.760 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.018 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.276 15:19:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.535 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.793 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.052 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.311 15:19:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:40.570 /dev/nbd0 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- 
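The stop half mirrors the start half: nbd_stop_disk detaches each export, waitfornbd_exit polls /proc/partitions until the name disappears, and the subtest only passes once nbd_get_disks comes back as the empty array '[]' seen above, i.e. a device count of 0. A simplified sketch of that teardown check (grep -c exits nonzero on a zero count, which the helper tolerates; the '|| true' below stands in for that handling):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  while grep -q -w nbd0 /proc/partitions; do :; done               # waitfornbd_exit, busy-wait simplified
  count=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  (( count == 0 ))                                                 # all exports gone
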
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.570 1+0 records in 00:14:40.570 1+0 records out 00:14:40.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486238 s, 8.4 MB/s 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.570 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:40.829 /dev/nbd1 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.829 1+0 records in 00:14:40.829 1+0 records out 00:14:40.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693812 s, 5.9 MB/s 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:40.829 15:19:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:40.829 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:41.087 /dev/nbd10 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.087 1+0 records in 00:14:41.087 1+0 records out 00:14:41.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860387 s, 4.8 MB/s 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.087 15:19:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:41.346 /dev/nbd11 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.346 15:19:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.346 1+0 records in 00:14:41.346 1+0 records out 00:14:41.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000956967 s, 4.3 MB/s 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.346 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:41.605 /dev/nbd12 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.605 1+0 records in 00:14:41.605 1+0 records out 00:14:41.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067328 s, 6.1 MB/s 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:41.605 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:41.864 /dev/nbd13 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.864 1+0 records in 00:14:41.864 1+0 records out 00:14:41.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648184 s, 6.3 MB/s 00:14:41.864 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.122 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd0", 00:14:42.123 "bdev_name": "nvme0n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd1", 00:14:42.123 "bdev_name": "nvme1n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd10", 00:14:42.123 "bdev_name": "nvme2n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd11", 00:14:42.123 "bdev_name": "nvme2n2" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd12", 00:14:42.123 "bdev_name": "nvme2n3" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd13", 00:14:42.123 "bdev_name": "nvme3n1" 00:14:42.123 } 00:14:42.123 ]' 00:14:42.123 15:19:24 
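The second subtest, nbd_rpc_data_verify, re-exports the same six bdevs but passes an explicit device path as the second argument to nbd_start_disk, which is why nvme2n1 lands on /dev/nbd10 rather than the next free index. The count assertion then runs in the opposite direction, expecting exactly 6:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10   # explicit target path
  n=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
  (( n == 6 ))                                                         # all six mappings present
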
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:42.123 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd0", 00:14:42.123 "bdev_name": "nvme0n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd1", 00:14:42.123 "bdev_name": "nvme1n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd10", 00:14:42.123 "bdev_name": "nvme2n1" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd11", 00:14:42.123 "bdev_name": "nvme2n2" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd12", 00:14:42.123 "bdev_name": "nvme2n3" 00:14:42.123 }, 00:14:42.123 { 00:14:42.123 "nbd_device": "/dev/nbd13", 00:14:42.123 "bdev_name": "nvme3n1" 00:14:42.123 } 00:14:42.123 ]' 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:42.381 /dev/nbd1 00:14:42.381 /dev/nbd10 00:14:42.381 /dev/nbd11 00:14:42.381 /dev/nbd12 00:14:42.381 /dev/nbd13' 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:42.381 /dev/nbd1 00:14:42.381 /dev/nbd10 00:14:42.381 /dev/nbd11 00:14:42.381 /dev/nbd12 00:14:42.381 /dev/nbd13' 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:42.381 256+0 records in 00:14:42.381 256+0 records out 00:14:42.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122375 s, 85.7 MB/s 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.381 15:19:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:42.381 256+0 records in 00:14:42.381 256+0 records out 00:14:42.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117689 s, 8.9 MB/s 00:14:42.381 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.381 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:42.640 256+0 records in 00:14:42.640 256+0 records out 00:14:42.640 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.149365 s, 7.0 MB/s 00:14:42.640 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.640 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:42.640 256+0 records in 00:14:42.640 256+0 records out 00:14:42.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124446 s, 8.4 MB/s 00:14:42.640 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.640 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:42.900 256+0 records in 00:14:42.900 256+0 records out 00:14:42.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129291 s, 8.1 MB/s 00:14:42.900 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.900 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:42.900 256+0 records in 00:14:42.900 256+0 records out 00:14:42.900 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125519 s, 8.4 MB/s 00:14:42.900 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:42.900 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:43.160 256+0 records in 00:14:43.160 256+0 records out 00:14:43.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126523 s, 8.3 MB/s 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.160 15:19:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.420 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:43.680 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.681 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
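The data round-trip itself is plain dd plus cmp: 1 MiB of /dev/urandom is staged in nbdrandtest, written through every device with O_DIRECT, then compared back byte for byte. The write rates in the trace (roughly 7-9 MB/s, versus 85.7 MB/s when filling the temp file) reflect 4 KiB synchronous direct writes crossing the NBD kernel/user boundary, not the speed of the underlying bdev. The suite's shape, condensed:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                  # stage 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct       # write it through each export
  done
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
      cmp -b -n 1M nbdrandtest "$dev"                                  # byte-for-byte readback check
  done
  rm nbdrandtest
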
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.955 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.214 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.215 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.215 15:19:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.473 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.732 15:19:27 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.732 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:44.733 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.733 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:44.994 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:44.995 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:44.995 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.995 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:44.995 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:45.254 malloc_lvol_verify 00:14:45.254 15:19:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:45.512 b3efcf80-246c-4ed0-a47c-77452c4b3ea5 00:14:45.512 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:45.769 96b711e1-3c33-4f66-acdc-4868d6d3a0d0 00:14:45.769 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:46.027 /dev/nbd0 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
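The final subtest, nbd_with_lvol_verify, checks that a logical volume exported over NBD reports a usable capacity before any I/O: a 16 MiB malloc bdev with 512-byte blocks backs an lvstore, a 4 MiB lvol is carved from it and exported as /dev/nbd0, and the test reads /sys/block/nbd0/size before handing the device to mkfs.ext4. The 8192 in the (( 8192 == 0 )) guard above is that size in 512-byte sectors: 4 MiB / 512 B = 8192. The RPC calls below are verbatim from the trace; the capacity guard is a paraphrase of wait_for_nbd_set_capacity:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
  "$rpc" -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume in lvstore 'lvs'
  "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
  [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]                 # capacity propagated to the kernel
  mkfs.ext4 /dev/nbd0
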
00:14:46.027 mke2fs 1.47.0 (5-Feb-2023) 00:14:46.027 Discarding device blocks: 0/4096 done 00:14:46.027 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:46.027 00:14:46.027 Allocating group tables: 0/1 done 00:14:46.027 Writing inode tables: 0/1 done 00:14:46.027 Creating journal (1024 blocks): done 00:14:46.027 Writing superblocks and filesystem accounting information: 0/1 done 00:14:46.027 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.027 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.285 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71172 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71172 ']' 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71172 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71172 00:14:46.286 killing process with pid 71172 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71172' 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71172 00:14:46.286 15:19:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71172 00:14:47.662 15:19:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:47.662 00:14:47.662 real 0m11.969s 00:14:47.662 user 0m15.586s 00:14:47.662 sys 0m5.029s 00:14:47.662 15:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:47.662 15:19:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:47.662 ************************************ 
00:14:47.662 END TEST bdev_nbd 00:14:47.662 ************************************ 00:14:47.662 15:19:30 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:47.662 15:19:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:47.662 15:19:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:47.662 15:19:30 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:47.662 15:19:30 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:47.662 15:19:30 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.662 15:19:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:47.662 ************************************ 00:14:47.662 START TEST bdev_fio 00:14:47.662 ************************************ 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:47.662 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
echo serialize_overlap=1 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:47.662 ************************************ 00:14:47.662 START TEST bdev_fio_rw_verify 00:14:47.662 ************************************ 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:47.662 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:47.921 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:47.921 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:47.921 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:47.921 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:47.921 15:19:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:47.921 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:47.921 fio-3.35 00:14:47.921 Starting 6 threads 00:15:00.132 00:15:00.132 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71589: Fri Oct 25 15:19:41 2024 00:15:00.132 read: IOPS=31.6k, BW=123MiB/s (129MB/s)(1234MiB/10001msec) 00:15:00.132 slat (usec): min=2, max=1691, avg= 6.35, stdev= 6.30 00:15:00.132 clat (usec): min=88, max=41188, avg=562.41, 
stdev=288.19 00:15:00.132 lat (usec): min=92, max=41196, avg=568.76, stdev=289.10 00:15:00.132 clat percentiles (usec): 00:15:00.132 | 50.000th=[ 537], 99.000th=[ 1369], 99.900th=[ 2376], 99.990th=[ 4817], 00:15:00.132 | 99.999th=[ 9765] 00:15:00.132 write: IOPS=32.0k, BW=125MiB/s (131MB/s)(1250MiB/10001msec); 0 zone resets 00:15:00.132 slat (usec): min=6, max=4067, avg=27.75, stdev=39.00 00:15:00.132 clat (usec): min=93, max=42310, avg=670.90, stdev=628.49 00:15:00.132 lat (usec): min=110, max=42363, avg=698.65, stdev=631.75 00:15:00.132 clat percentiles (usec): 00:15:00.132 | 50.000th=[ 627], 99.000th=[ 1614], 99.900th=[ 2474], 99.990th=[38536], 00:15:00.132 | 99.999th=[42206] 00:15:00.132 bw ( KiB/s): min=100381, max=156583, per=100.00%, avg=128978.05, stdev=2501.68, samples=114 00:15:00.132 iops : min=25093, max=39145, avg=32243.79, stdev=625.44, samples=114 00:15:00.132 lat (usec) : 100=0.01%, 250=6.25%, 500=30.30%, 750=38.48%, 1000=17.40% 00:15:00.132 lat (msec) : 2=7.32%, 4=0.22%, 10=0.02%, 20=0.01%, 50=0.01% 00:15:00.132 cpu : usr=52.44%, sys=31.61%, ctx=8390, majf=0, minf=26524 00:15:00.132 IO depths : 1=11.8%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.132 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.132 issued rwts: total=315960,320053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:00.132 00:15:00.132 Run status group 0 (all jobs): 00:15:00.132 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=1234MiB (1294MB), run=10001-10001msec 00:15:00.132 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1250MiB (1311MB), run=10001-10001msec 00:15:00.394 ----------------------------------------------------- 00:15:00.394 Suppressions used: 00:15:00.394 count bytes template 00:15:00.394 6 48 /usr/src/fio/parse.c 00:15:00.394 3867 371232 /usr/src/fio/iolog.c 00:15:00.394 1 8 libtcmalloc_minimal.so 00:15:00.394 1 904 libcrypto.so 00:15:00.394 ----------------------------------------------------- 00:15:00.394 00:15:00.394 00:15:00.394 real 0m12.629s 00:15:00.394 user 0m33.538s 00:15:00.394 sys 0m19.403s 00:15:00.395 15:19:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.395 15:19:42 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:00.395 ************************************ 00:15:00.395 END TEST bdev_fio_rw_verify 00:15:00.395 ************************************ 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # 
local fio_dir=/usr/src/fio 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "700cde60-7334-401d-9753-e50924d0be70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "700cde60-7334-401d-9753-e50924d0be70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "c96a78ac-31f2-477d-85ae-51eaa35a18df"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c96a78ac-31f2-477d-85ae-51eaa35a18df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "7db494c8-0e8a-46ea-90d2-1db204a440e4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7db494c8-0e8a-46ea-90d2-1db204a440e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e81fe934-3c10-4d48-a921-0b65248c6750"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e81fe934-3c10-4d48-a921-0b65248c6750",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4744718d-b2db-49c6-98c6-a620c8504a96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4744718d-b2db-49c6-98c6-a620c8504a96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "80bf9b5d-0c21-4547-8724-f9b7b692d328"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "80bf9b5d-0c21-4547-8724-f9b7b692d328",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:00.395 /home/vagrant/spdk_repo/spdk 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:15:00.395 00:15:00.395 real 0m12.885s 00:15:00.395 user 0m33.675s 00:15:00.395 sys 0m19.528s 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:00.395 15:19:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:00.395 ************************************ 00:15:00.395 END TEST bdev_fio 00:15:00.395 ************************************ 00:15:00.655 15:19:43 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:00.655 15:19:43 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:00.655 15:19:43 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:00.655 15:19:43 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:00.655 15:19:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:00.655 ************************************ 00:15:00.655 START TEST bdev_verify 00:15:00.655 ************************************ 00:15:00.655 15:19:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:00.655 [2024-10-25 15:19:43.277908] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:00.655 [2024-10-25 15:19:43.278038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71772 ] 00:15:00.914 [2024-10-25 15:19:43.461271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:00.914 [2024-10-25 15:19:43.583570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.914 [2024-10-25 15:19:43.583601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.482 Running I/O for 5 seconds... 
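[Editor's note] The bdev_verify stage starting above drives all six xNVMe bdevs through SPDK's bdevperf example in verify mode. A minimal standalone sketch of the equivalent invocation, assuming the repository layout shown in this log and an already-built bdevperf binary; the sudo is an assumption about hugepage/device privileges, not something the log shows. All flags are copied from the run_test line above:

    # 4 KiB I/Os (-o 4096) at queue depth 128 (-q 128) in the verify
    # workload (-w verify) for 5 seconds (-t 5), reactors pinned to
    # cores 0-1 (-m 0x3); bdev definitions come from bdev.json
    sudo /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3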
00:15:03.816 23136.00 IOPS, 90.38 MiB/s [2024-10-25T15:19:47.482Z] 24416.00 IOPS, 95.38 MiB/s [2024-10-25T15:19:48.436Z] 24864.00 IOPS, 97.12 MiB/s [2024-10-25T15:19:49.373Z] 25040.00 IOPS, 97.81 MiB/s [2024-10-25T15:19:49.373Z] 24755.20 IOPS, 96.70 MiB/s 00:15:06.645 Latency(us) 00:15:06.645 [2024-10-25T15:19:49.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.645 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0xa0000 00:15:06.645 nvme0n1 : 5.04 1827.40 7.14 0.00 0.00 69925.95 10580.51 66957.26 00:15:06.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0xa0000 length 0xa0000 00:15:06.645 nvme0n1 : 5.05 1901.75 7.43 0.00 0.00 67196.87 6422.00 60640.54 00:15:06.645 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0xbd0bd 00:15:06.645 nvme1n1 : 5.04 2854.76 11.15 0.00 0.00 44568.02 5474.49 52849.91 00:15:06.645 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:06.645 nvme1n1 : 5.05 2826.82 11.04 0.00 0.00 44937.40 4000.59 59798.31 00:15:06.645 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0x80000 00:15:06.645 nvme2n1 : 5.05 1826.35 7.13 0.00 0.00 69654.73 12633.45 57271.62 00:15:06.645 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x80000 length 0x80000 00:15:06.645 nvme2n1 : 5.06 1922.86 7.51 0.00 0.00 66053.30 12159.69 60640.54 00:15:06.645 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0x80000 00:15:06.645 nvme2n2 : 5.04 1830.02 7.15 0.00 0.00 69387.14 5132.34 62746.11 00:15:06.645 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x80000 length 0x80000 00:15:06.645 nvme2n2 : 5.04 1928.41 7.53 0.00 0.00 65720.94 13580.95 62325.00 00:15:06.645 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0x80000 00:15:06.645 nvme2n3 : 5.06 1848.31 7.22 0.00 0.00 68586.98 3816.35 63167.23 00:15:06.645 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x80000 length 0x80000 00:15:06.645 nvme2n3 : 5.07 1945.33 7.60 0.00 0.00 65047.70 2882.00 67378.38 00:15:06.645 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x0 length 0x20000 00:15:06.645 nvme3n1 : 5.06 1847.57 7.22 0.00 0.00 68504.92 3026.76 69483.95 00:15:06.645 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:06.645 Verification LBA range: start 0x20000 length 0x20000 00:15:06.645 nvme3n1 : 5.07 1944.87 7.60 0.00 0.00 65002.38 3895.31 69062.84 00:15:06.645 [2024-10-25T15:19:49.373Z] =================================================================================================================== 00:15:06.645 [2024-10-25T15:19:49.373Z] Total : 24504.47 95.72 0.00 0.00 62198.84 2882.00 69483.95 00:15:08.025 00:15:08.025 real 0m7.161s 00:15:08.025 user 0m11.013s 00:15:08.025 sys 0m2.032s 00:15:08.025 15:19:50 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.025 15:19:50 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:08.025 ************************************ 00:15:08.025 END TEST bdev_verify 00:15:08.025 ************************************ 00:15:08.025 15:19:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:08.025 15:19:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:08.025 15:19:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.025 15:19:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.025 ************************************ 00:15:08.025 START TEST bdev_verify_big_io 00:15:08.025 ************************************ 00:15:08.025 15:19:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:08.025 [2024-10-25 15:19:50.505152] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:08.026 [2024-10-25 15:19:50.505293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71875 ] 00:15:08.026 [2024-10-25 15:19:50.685940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:08.283 [2024-10-25 15:19:50.804833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.283 [2024-10-25 15:19:50.804870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.850 Running I/O for 5 seconds... 
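[Editor's note] In these bdevperf tables the MiB/s column is simply IOPS multiplied by the I/O size. The big-I/O run below uses -o 65536, so its first sample of 1088 IOPS corresponds to 1088 x 64 KiB = 68 MiB/s, exactly as printed. A throwaway awk check of that arithmetic (both numbers are taken from the log; nothing else is assumed):

    # bandwidth in MiB/s from IOPS and I/O size in bytes
    awk 'BEGIN { iops = 1088; bs = 65536; printf "%.2f MiB/s\n", iops * bs / 1048576 }'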
00:15:12.360 1088.00 IOPS, 68.00 MiB/s [2024-10-25T15:19:57.013Z] 1989.00 IOPS, 124.31 MiB/s [2024-10-25T15:19:57.272Z] 2595.33 IOPS, 162.21 MiB/s [2024-10-25T15:19:57.272Z] 2765.00 IOPS, 172.81 MiB/s 00:15:14.544 Latency(us) 00:15:14.544 [2024-10-25T15:19:57.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.544 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0xa000 00:15:14.544 nvme0n1 : 5.76 183.45 11.47 0.00 0.00 670899.38 9369.81 1233024.31 00:15:14.544 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0xa000 length 0xa000 00:15:14.544 nvme0n1 : 5.69 157.59 9.85 0.00 0.00 772501.84 111174.32 838860.80 00:15:14.544 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0xbd0b 00:15:14.544 nvme1n1 : 5.70 159.05 9.94 0.00 0.00 760330.32 37900.34 1266713.50 00:15:14.544 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:14.544 nvme1n1 : 5.71 201.71 12.61 0.00 0.00 606821.85 11843.86 677152.69 00:15:14.544 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0x8000 00:15:14.544 nvme2n1 : 5.77 163.66 10.23 0.00 0.00 733025.80 64009.46 1724886.46 00:15:14.544 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x8000 length 0x8000 00:15:14.544 nvme2n1 : 5.72 179.17 11.20 0.00 0.00 664357.22 111174.32 805171.61 00:15:14.544 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0x8000 00:15:14.544 nvme2n2 : 5.77 199.61 12.48 0.00 0.00 586399.52 57692.74 1044364.85 00:15:14.544 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x8000 length 0x8000 00:15:14.544 nvme2n2 : 5.72 165.05 10.32 0.00 0.00 704305.23 117912.16 1428421.60 00:15:14.544 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0x8000 00:15:14.544 nvme2n3 : 5.76 152.76 9.55 0.00 0.00 748369.20 39795.35 1738362.14 00:15:14.544 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x8000 length 0x8000 00:15:14.544 nvme2n3 : 5.73 142.48 8.90 0.00 0.00 803297.75 16844.59 1489062.14 00:15:14.544 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x0 length 0x2000 00:15:14.544 nvme3n1 : 5.78 157.83 9.86 0.00 0.00 708024.78 7001.03 1489062.14 00:15:14.544 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:14.544 Verification LBA range: start 0x2000 length 0x2000 00:15:14.544 nvme3n1 : 5.73 153.59 9.60 0.00 0.00 726742.02 18950.17 1286927.01 00:15:14.544 [2024-10-25T15:19:57.272Z] =================================================================================================================== 00:15:14.544 [2024-10-25T15:19:57.272Z] Total : 2015.96 126.00 0.00 0.00 700616.80 7001.03 1738362.14 00:15:15.920 00:15:15.920 real 0m8.233s 00:15:15.920 user 0m14.843s 00:15:15.920 sys 0m0.629s 00:15:15.920 15:19:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:15:15.920 15:19:58 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.920 ************************************ 00:15:15.920 END TEST bdev_verify_big_io 00:15:15.920 ************************************ 00:15:16.181 15:19:58 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.181 15:19:58 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:16.181 15:19:58 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.181 15:19:58 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.181 ************************************ 00:15:16.181 START TEST bdev_write_zeroes 00:15:16.181 ************************************ 00:15:16.181 15:19:58 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.181 [2024-10-25 15:19:58.812551] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:16.181 [2024-10-25 15:19:58.812692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71989 ] 00:15:16.441 [2024-10-25 15:19:58.997093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.441 [2024-10-25 15:19:59.122001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.010 Running I/O for 1 seconds... 
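[Editor's note] The bdev_write_zeroes test starting above exercises the write_zeroes I/O type, which every xNVMe bdev in this run advertises ("write_zeroes": true in the bdev JSON dumped earlier in this log). A small sketch for checking that capability by hand, assuming the same bdevs are loaded in a running SPDK target reachable over the default RPC socket and that jq is installed:

    # print each bdev name together with its write_zeroes capability
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name) write_zeroes=\(.supported_io_types.write_zeroes)"'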
00:15:17.950 61546.00 IOPS, 240.41 MiB/s 00:15:17.950 Latency(us) 00:15:17.950 [2024-10-25T15:20:00.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.950 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme0n1 : 1.02 9751.89 38.09 0.00 0.00 13113.70 7580.07 34952.53 00:15:17.950 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme1n1 : 1.03 12090.26 47.23 0.00 0.00 10567.42 4790.18 32004.73 00:15:17.950 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme2n1 : 1.03 9727.64 38.00 0.00 0.00 13067.60 5658.73 30741.38 00:15:17.950 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme2n2 : 1.03 9716.43 37.95 0.00 0.00 13067.78 5158.66 30530.83 00:15:17.950 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme2n3 : 1.03 9705.21 37.91 0.00 0.00 13076.38 5211.30 30109.71 00:15:17.950 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:17.950 nvme3n1 : 1.03 9694.02 37.87 0.00 0.00 13081.67 5316.58 29688.60 00:15:17.950 [2024-10-25T15:20:00.678Z] =================================================================================================================== 00:15:17.950 [2024-10-25T15:20:00.678Z] Total : 60685.44 237.05 0.00 0.00 12581.40 4790.18 34952.53 00:15:19.334 00:15:19.334 real 0m3.140s 00:15:19.334 user 0m2.339s 00:15:19.334 sys 0m0.628s 00:15:19.334 ************************************ 00:15:19.334 END TEST bdev_write_zeroes 00:15:19.334 ************************************ 00:15:19.334 15:20:01 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:19.334 15:20:01 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:19.334 15:20:01 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:19.334 15:20:01 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:19.334 15:20:01 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:19.334 15:20:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:19.334 ************************************ 00:15:19.334 START TEST bdev_json_nonenclosed 00:15:19.334 ************************************ 00:15:19.334 15:20:01 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:19.334 [2024-10-25 15:20:02.037395] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:15:19.334 [2024-10-25 15:20:02.037588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72050 ] 00:15:19.593 [2024-10-25 15:20:02.239554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.851 [2024-10-25 15:20:02.368926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.851 [2024-10-25 15:20:02.369026] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:19.851 [2024-10-25 15:20:02.369050] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:19.851 [2024-10-25 15:20:02.369062] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.111 00:15:20.111 real 0m0.715s 00:15:20.111 user 0m0.451s 00:15:20.111 sys 0m0.158s 00:15:20.111 15:20:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.111 ************************************ 00:15:20.111 15:20:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:20.111 END TEST bdev_json_nonenclosed 00:15:20.111 ************************************ 00:15:20.111 15:20:02 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.111 15:20:02 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:20.111 15:20:02 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.111 15:20:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.111 ************************************ 00:15:20.111 START TEST bdev_json_nonarray 00:15:20.111 ************************************ 00:15:20.111 15:20:02 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:20.111 [2024-10-25 15:20:02.806677] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:20.111 [2024-10-25 15:20:02.806822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72076 ] 00:15:20.370 [2024-10-25 15:20:02.991739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.630 [2024-10-25 15:20:03.117184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.630 [2024-10-25 15:20:03.117304] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
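[Editor's note] The two negative tests here (bdev_json_nonenclosed above, bdev_json_nonarray finishing below) feed bdevperf deliberately malformed configs and pass when the loader rejects them, which the *ERROR* lines confirm: the config must be a JSON object whose "subsystems" member is an array. A sketch of the smallest well-formed skeleton, written as a bash heredoc; the empty subsystems array is a placeholder, not taken from this run:

    # smallest config shape the JSON loader accepts: an enclosing object
    # with "subsystems" as an array (empty here as a placeholder)
    cat > /tmp/minimal.json <<'EOF'
    {
      "subsystems": []
    }
    EOF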
00:15:20.630 [2024-10-25 15:20:03.117329] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:20.630 [2024-10-25 15:20:03.117341] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:20.888 00:15:20.888 real 0m0.683s 00:15:20.888 user 0m0.420s 00:15:20.888 sys 0m0.157s 00:15:20.888 ************************************ 00:15:20.888 END TEST bdev_json_nonarray 00:15:20.888 ************************************ 00:15:20.888 15:20:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.888 15:20:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:20.888 15:20:03 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:21.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:26.727 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.727 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.727 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.727 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:26.727 00:15:26.727 real 1m7.051s 00:15:26.727 user 1m39.098s 00:15:26.727 sys 0m41.110s 00:15:26.727 15:20:09 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.727 15:20:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:26.727 ************************************ 00:15:26.727 END TEST blockdev_xnvme 00:15:26.727 ************************************ 00:15:26.727 15:20:09 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:26.727 15:20:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:26.727 15:20:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.727 15:20:09 -- common/autotest_common.sh@10 -- # set +x 00:15:26.727 ************************************ 00:15:26.727 START TEST ublk 00:15:26.727 ************************************ 00:15:26.727 15:20:09 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:26.986 * Looking for test storage... 
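[Editor's note] In the setup.sh output just above, the four emulated NVMe controllers (1b36:0010) are handed back from the kernel nvme driver to uio_pci_generic so SPDK can claim them, while the virtio disk hosting the root mounts is left alone. A small sketch for inspecting those bindings from sysfs, independent of SPDK; the 0x010802 class code (NVMe) and the sysfs layout are standard Linux, everything else is illustrative:

    # list every NVMe-class PCI function and its currently bound driver
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/class") == 0x010802 ]] || continue
        drv=none
        [[ -e $dev/driver ]] && drv=$(basename "$(readlink -f "$dev/driver")")
        echo "$(basename "$dev") -> $drv"
    done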
00:15:26.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1689 -- # lcov --version 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.986 15:20:09 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.986 15:20:09 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.986 15:20:09 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.986 15:20:09 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.986 15:20:09 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.986 15:20:09 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:26.986 15:20:09 ublk -- scripts/common.sh@345 -- # : 1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.986 15:20:09 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.986 15:20:09 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@353 -- # local d=1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.986 15:20:09 ublk -- scripts/common.sh@355 -- # echo 1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.986 15:20:09 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@353 -- # local d=2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.986 15:20:09 ublk -- scripts/common.sh@355 -- # echo 2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.986 15:20:09 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.986 15:20:09 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.986 15:20:09 ublk -- scripts/common.sh@368 -- # return 0 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.986 15:20:09 ublk -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:26.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.986 --rc genhtml_branch_coverage=1 00:15:26.986 --rc genhtml_function_coverage=1 00:15:26.986 --rc genhtml_legend=1 00:15:26.986 --rc geninfo_all_blocks=1 00:15:26.986 --rc geninfo_unexecuted_blocks=1 00:15:26.986 00:15:26.986 ' 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:26.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.987 --rc genhtml_branch_coverage=1 00:15:26.987 --rc genhtml_function_coverage=1 00:15:26.987 --rc genhtml_legend=1 00:15:26.987 --rc geninfo_all_blocks=1 00:15:26.987 --rc geninfo_unexecuted_blocks=1 00:15:26.987 00:15:26.987 ' 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:26.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.987 --rc genhtml_branch_coverage=1 00:15:26.987 --rc 
genhtml_function_coverage=1 00:15:26.987 --rc genhtml_legend=1 00:15:26.987 --rc geninfo_all_blocks=1 00:15:26.987 --rc geninfo_unexecuted_blocks=1 00:15:26.987 00:15:26.987 ' 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:26.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.987 --rc genhtml_branch_coverage=1 00:15:26.987 --rc genhtml_function_coverage=1 00:15:26.987 --rc genhtml_legend=1 00:15:26.987 --rc geninfo_all_blocks=1 00:15:26.987 --rc geninfo_unexecuted_blocks=1 00:15:26.987 00:15:26.987 ' 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:26.987 15:20:09 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:26.987 15:20:09 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:26.987 15:20:09 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:26.987 15:20:09 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:26.987 15:20:09 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:26.987 15:20:09 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:26.987 15:20:09 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:26.987 15:20:09 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:26.987 15:20:09 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.987 15:20:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:26.987 ************************************ 00:15:26.987 START TEST test_save_ublk_config 00:15:26.987 ************************************ 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72378 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72378 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72378 ']' 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
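[Editor's note] waitforlisten above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A minimal polling sketch with the same effect, assuming only that rpc.py from this repo is usable; rpc_get_methods is a cheap RPC that succeeds as soon as the server is up, and the retry count and sleep interval here are arbitrary choices, not the values autotest uses:

    # poll the RPC socket until spdk_tgt responds, giving up after ~50 s
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
               -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done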
00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.987 15:20:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:27.244 [2024-10-25 15:20:09.729283] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:27.244 [2024-10-25 15:20:09.729910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:15:27.244 [2024-10-25 15:20:09.914675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.501 [2024-10-25 15:20:10.042656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.440 [2024-10-25 15:20:11.008235] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:28.440 [2024-10-25 15:20:11.009402] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:28.440 malloc0 00:15:28.440 [2024-10-25 15:20:11.102375] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:28.440 [2024-10-25 15:20:11.102486] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:28.440 [2024-10-25 15:20:11.102501] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:28.440 [2024-10-25 15:20:11.102510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:28.440 [2024-10-25 15:20:11.111324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:28.440 [2024-10-25 15:20:11.111352] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:28.440 [2024-10-25 15:20:11.118263] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:28.440 [2024-10-25 15:20:11.118366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:28.440 [2024-10-25 15:20:11.134260] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:28.440 0 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.440 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.700 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.700 15:20:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:28.700 
"subsystems": [ 00:15:28.700 { 00:15:28.700 "subsystem": "fsdev", 00:15:28.700 "config": [ 00:15:28.700 { 00:15:28.700 "method": "fsdev_set_opts", 00:15:28.700 "params": { 00:15:28.700 "fsdev_io_pool_size": 65535, 00:15:28.700 "fsdev_io_cache_size": 256 00:15:28.700 } 00:15:28.700 } 00:15:28.700 ] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "keyring", 00:15:28.700 "config": [] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "iobuf", 00:15:28.700 "config": [ 00:15:28.700 { 00:15:28.700 "method": "iobuf_set_options", 00:15:28.700 "params": { 00:15:28.700 "small_pool_count": 8192, 00:15:28.700 "large_pool_count": 1024, 00:15:28.700 "small_bufsize": 8192, 00:15:28.700 "large_bufsize": 135168, 00:15:28.700 "enable_numa": false 00:15:28.700 } 00:15:28.700 } 00:15:28.700 ] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "sock", 00:15:28.700 "config": [ 00:15:28.700 { 00:15:28.700 "method": "sock_set_default_impl", 00:15:28.700 "params": { 00:15:28.700 "impl_name": "posix" 00:15:28.700 } 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "method": "sock_impl_set_options", 00:15:28.700 "params": { 00:15:28.700 "impl_name": "ssl", 00:15:28.700 "recv_buf_size": 4096, 00:15:28.700 "send_buf_size": 4096, 00:15:28.700 "enable_recv_pipe": true, 00:15:28.700 "enable_quickack": false, 00:15:28.700 "enable_placement_id": 0, 00:15:28.700 "enable_zerocopy_send_server": true, 00:15:28.700 "enable_zerocopy_send_client": false, 00:15:28.700 "zerocopy_threshold": 0, 00:15:28.700 "tls_version": 0, 00:15:28.700 "enable_ktls": false 00:15:28.700 } 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "method": "sock_impl_set_options", 00:15:28.700 "params": { 00:15:28.700 "impl_name": "posix", 00:15:28.700 "recv_buf_size": 2097152, 00:15:28.700 "send_buf_size": 2097152, 00:15:28.700 "enable_recv_pipe": true, 00:15:28.700 "enable_quickack": false, 00:15:28.700 "enable_placement_id": 0, 00:15:28.700 "enable_zerocopy_send_server": true, 00:15:28.700 "enable_zerocopy_send_client": false, 00:15:28.700 "zerocopy_threshold": 0, 00:15:28.700 "tls_version": 0, 00:15:28.700 "enable_ktls": false 00:15:28.700 } 00:15:28.700 } 00:15:28.700 ] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "vmd", 00:15:28.700 "config": [] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "accel", 00:15:28.700 "config": [ 00:15:28.700 { 00:15:28.700 "method": "accel_set_options", 00:15:28.700 "params": { 00:15:28.700 "small_cache_size": 128, 00:15:28.700 "large_cache_size": 16, 00:15:28.700 "task_count": 2048, 00:15:28.700 "sequence_count": 2048, 00:15:28.700 "buf_count": 2048 00:15:28.700 } 00:15:28.700 } 00:15:28.700 ] 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "subsystem": "bdev", 00:15:28.700 "config": [ 00:15:28.700 { 00:15:28.700 "method": "bdev_set_options", 00:15:28.700 "params": { 00:15:28.700 "bdev_io_pool_size": 65535, 00:15:28.700 "bdev_io_cache_size": 256, 00:15:28.700 "bdev_auto_examine": true, 00:15:28.700 "iobuf_small_cache_size": 128, 00:15:28.700 "iobuf_large_cache_size": 16 00:15:28.700 } 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "method": "bdev_raid_set_options", 00:15:28.700 "params": { 00:15:28.700 "process_window_size_kb": 1024, 00:15:28.700 "process_max_bandwidth_mb_sec": 0 00:15:28.700 } 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "method": "bdev_iscsi_set_options", 00:15:28.700 "params": { 00:15:28.700 "timeout_sec": 30 00:15:28.700 } 00:15:28.700 }, 00:15:28.700 { 00:15:28.700 "method": "bdev_nvme_set_options", 00:15:28.700 "params": { 00:15:28.700 "action_on_timeout": "none", 
00:15:28.700 "timeout_us": 0, 00:15:28.700 "timeout_admin_us": 0, 00:15:28.700 "keep_alive_timeout_ms": 10000, 00:15:28.700 "arbitration_burst": 0, 00:15:28.700 "low_priority_weight": 0, 00:15:28.700 "medium_priority_weight": 0, 00:15:28.700 "high_priority_weight": 0, 00:15:28.700 "nvme_adminq_poll_period_us": 10000, 00:15:28.700 "nvme_ioq_poll_period_us": 0, 00:15:28.700 "io_queue_requests": 0, 00:15:28.700 "delay_cmd_submit": true, 00:15:28.700 "transport_retry_count": 4, 00:15:28.700 "bdev_retry_count": 3, 00:15:28.700 "transport_ack_timeout": 0, 00:15:28.700 "ctrlr_loss_timeout_sec": 0, 00:15:28.700 "reconnect_delay_sec": 0, 00:15:28.700 "fast_io_fail_timeout_sec": 0, 00:15:28.700 "disable_auto_failback": false, 00:15:28.700 "generate_uuids": false, 00:15:28.700 "transport_tos": 0, 00:15:28.700 "nvme_error_stat": false, 00:15:28.700 "rdma_srq_size": 0, 00:15:28.700 "io_path_stat": false, 00:15:28.700 "allow_accel_sequence": false, 00:15:28.700 "rdma_max_cq_size": 0, 00:15:28.700 "rdma_cm_event_timeout_ms": 0, 00:15:28.700 "dhchap_digests": [ 00:15:28.700 "sha256", 00:15:28.700 "sha384", 00:15:28.700 "sha512" 00:15:28.701 ], 00:15:28.701 "dhchap_dhgroups": [ 00:15:28.701 "null", 00:15:28.701 "ffdhe2048", 00:15:28.701 "ffdhe3072", 00:15:28.701 "ffdhe4096", 00:15:28.701 "ffdhe6144", 00:15:28.701 "ffdhe8192" 00:15:28.701 ] 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "bdev_nvme_set_hotplug", 00:15:28.701 "params": { 00:15:28.701 "period_us": 100000, 00:15:28.701 "enable": false 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "bdev_malloc_create", 00:15:28.701 "params": { 00:15:28.701 "name": "malloc0", 00:15:28.701 "num_blocks": 8192, 00:15:28.701 "block_size": 4096, 00:15:28.701 "physical_block_size": 4096, 00:15:28.701 "uuid": "d3521f79-b5d3-4959-9e75-33cce9d90963", 00:15:28.701 "optimal_io_boundary": 0, 00:15:28.701 "md_size": 0, 00:15:28.701 "dif_type": 0, 00:15:28.701 "dif_is_head_of_md": false, 00:15:28.701 "dif_pi_format": 0 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "bdev_wait_for_examine" 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "scsi", 00:15:28.701 "config": null 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "scheduler", 00:15:28.701 "config": [ 00:15:28.701 { 00:15:28.701 "method": "framework_set_scheduler", 00:15:28.701 "params": { 00:15:28.701 "name": "static" 00:15:28.701 } 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "vhost_scsi", 00:15:28.701 "config": [] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "vhost_blk", 00:15:28.701 "config": [] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "ublk", 00:15:28.701 "config": [ 00:15:28.701 { 00:15:28.701 "method": "ublk_create_target", 00:15:28.701 "params": { 00:15:28.701 "cpumask": "1" 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "ublk_start_disk", 00:15:28.701 "params": { 00:15:28.701 "bdev_name": "malloc0", 00:15:28.701 "ublk_id": 0, 00:15:28.701 "num_queues": 1, 00:15:28.701 "queue_depth": 128 00:15:28.701 } 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "nbd", 00:15:28.701 "config": [] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "nvmf", 00:15:28.701 "config": [ 00:15:28.701 { 00:15:28.701 "method": "nvmf_set_config", 00:15:28.701 "params": { 00:15:28.701 "discovery_filter": "match_any", 00:15:28.701 "admin_cmd_passthru": { 00:15:28.701 "identify_ctrlr": false 
00:15:28.701 }, 00:15:28.701 "dhchap_digests": [ 00:15:28.701 "sha256", 00:15:28.701 "sha384", 00:15:28.701 "sha512" 00:15:28.701 ], 00:15:28.701 "dhchap_dhgroups": [ 00:15:28.701 "null", 00:15:28.701 "ffdhe2048", 00:15:28.701 "ffdhe3072", 00:15:28.701 "ffdhe4096", 00:15:28.701 "ffdhe6144", 00:15:28.701 "ffdhe8192" 00:15:28.701 ] 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "nvmf_set_max_subsystems", 00:15:28.701 "params": { 00:15:28.701 "max_subsystems": 1024 00:15:28.701 } 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "method": "nvmf_set_crdt", 00:15:28.701 "params": { 00:15:28.701 "crdt1": 0, 00:15:28.701 "crdt2": 0, 00:15:28.701 "crdt3": 0 00:15:28.701 } 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 }, 00:15:28.701 { 00:15:28.701 "subsystem": "iscsi", 00:15:28.701 "config": [ 00:15:28.701 { 00:15:28.701 "method": "iscsi_set_options", 00:15:28.701 "params": { 00:15:28.701 "node_base": "iqn.2016-06.io.spdk", 00:15:28.701 "max_sessions": 128, 00:15:28.701 "max_connections_per_session": 2, 00:15:28.701 "max_queue_depth": 64, 00:15:28.701 "default_time2wait": 2, 00:15:28.701 "default_time2retain": 20, 00:15:28.701 "first_burst_length": 8192, 00:15:28.701 "immediate_data": true, 00:15:28.701 "allow_duplicated_isid": false, 00:15:28.701 "error_recovery_level": 0, 00:15:28.701 "nop_timeout": 60, 00:15:28.701 "nop_in_interval": 30, 00:15:28.701 "disable_chap": false, 00:15:28.701 "require_chap": false, 00:15:28.701 "mutual_chap": false, 00:15:28.701 "chap_group": 0, 00:15:28.701 "max_large_datain_per_connection": 64, 00:15:28.701 "max_r2t_per_connection": 4, 00:15:28.701 "pdu_pool_size": 36864, 00:15:28.701 "immediate_data_pool_size": 16384, 00:15:28.701 "data_out_pool_size": 2048 00:15:28.701 } 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 } 00:15:28.701 ] 00:15:28.701 }' 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72378 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72378 ']' 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72378 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.701 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72378 00:15:28.961 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:28.961 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:28.961 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72378' 00:15:28.961 killing process with pid 72378 00:15:28.961 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72378 00:15:28.961 15:20:11 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72378 00:15:30.866 [2024-10-25 15:20:13.428666] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:30.866 [2024-10-25 15:20:13.469325] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:30.866 [2024-10-25 15:20:13.469489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:30.866 [2024-10-25 15:20:13.474270] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:30.866 [2024-10-25 
15:20:13.474322] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:30.866 [2024-10-25 15:20:13.474339] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:30.866 [2024-10-25 15:20:13.474367] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:30.866 [2024-10-25 15:20:13.474521] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72449 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72449 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72449 ']' 00:15:32.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:32.773 15:20:15 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:32.773 "subsystems": [ 00:15:32.773 { 00:15:32.773 "subsystem": "fsdev", 00:15:32.773 "config": [ 00:15:32.773 { 00:15:32.773 "method": "fsdev_set_opts", 00:15:32.773 "params": { 00:15:32.773 "fsdev_io_pool_size": 65535, 00:15:32.773 "fsdev_io_cache_size": 256 00:15:32.773 } 00:15:32.773 } 00:15:32.773 ] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "keyring", 00:15:32.773 "config": [] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "iobuf", 00:15:32.773 "config": [ 00:15:32.773 { 00:15:32.773 "method": "iobuf_set_options", 00:15:32.773 "params": { 00:15:32.773 "small_pool_count": 8192, 00:15:32.773 "large_pool_count": 1024, 00:15:32.773 "small_bufsize": 8192, 00:15:32.773 "large_bufsize": 135168, 00:15:32.773 "enable_numa": false 00:15:32.773 } 00:15:32.773 } 00:15:32.773 ] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "sock", 00:15:32.773 "config": [ 00:15:32.773 { 00:15:32.773 "method": "sock_set_default_impl", 00:15:32.773 "params": { 00:15:32.773 "impl_name": "posix" 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "sock_impl_set_options", 00:15:32.773 "params": { 00:15:32.773 "impl_name": "ssl", 00:15:32.773 "recv_buf_size": 4096, 00:15:32.773 "send_buf_size": 4096, 00:15:32.773 "enable_recv_pipe": true, 00:15:32.773 "enable_quickack": false, 00:15:32.773 "enable_placement_id": 0, 00:15:32.773 "enable_zerocopy_send_server": true, 00:15:32.773 "enable_zerocopy_send_client": false, 00:15:32.773 "zerocopy_threshold": 0, 00:15:32.773 "tls_version": 0, 00:15:32.773 "enable_ktls": false 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "sock_impl_set_options", 00:15:32.773 "params": { 00:15:32.773 "impl_name": "posix", 00:15:32.773 "recv_buf_size": 2097152, 00:15:32.773 "send_buf_size": 2097152, 00:15:32.773 "enable_recv_pipe": true, 00:15:32.773 "enable_quickack": false, 00:15:32.773 "enable_placement_id": 0, 00:15:32.773 "enable_zerocopy_send_server": true, 
00:15:32.773 "enable_zerocopy_send_client": false, 00:15:32.773 "zerocopy_threshold": 0, 00:15:32.773 "tls_version": 0, 00:15:32.773 "enable_ktls": false 00:15:32.773 } 00:15:32.773 } 00:15:32.773 ] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "vmd", 00:15:32.773 "config": [] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "accel", 00:15:32.773 "config": [ 00:15:32.773 { 00:15:32.773 "method": "accel_set_options", 00:15:32.773 "params": { 00:15:32.773 "small_cache_size": 128, 00:15:32.773 "large_cache_size": 16, 00:15:32.773 "task_count": 2048, 00:15:32.773 "sequence_count": 2048, 00:15:32.773 "buf_count": 2048 00:15:32.773 } 00:15:32.773 } 00:15:32.773 ] 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "subsystem": "bdev", 00:15:32.773 "config": [ 00:15:32.773 { 00:15:32.773 "method": "bdev_set_options", 00:15:32.773 "params": { 00:15:32.773 "bdev_io_pool_size": 65535, 00:15:32.773 "bdev_io_cache_size": 256, 00:15:32.773 "bdev_auto_examine": true, 00:15:32.773 "iobuf_small_cache_size": 128, 00:15:32.773 "iobuf_large_cache_size": 16 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "bdev_raid_set_options", 00:15:32.773 "params": { 00:15:32.773 "process_window_size_kb": 1024, 00:15:32.773 "process_max_bandwidth_mb_sec": 0 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "bdev_iscsi_set_options", 00:15:32.773 "params": { 00:15:32.773 "timeout_sec": 30 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "bdev_nvme_set_options", 00:15:32.773 "params": { 00:15:32.773 "action_on_timeout": "none", 00:15:32.773 "timeout_us": 0, 00:15:32.773 "timeout_admin_us": 0, 00:15:32.773 "keep_alive_timeout_ms": 10000, 00:15:32.773 "arbitration_burst": 0, 00:15:32.773 "low_priority_weight": 0, 00:15:32.773 "medium_priority_weight": 0, 00:15:32.773 "high_priority_weight": 0, 00:15:32.773 "nvme_adminq_poll_period_us": 10000, 00:15:32.773 "nvme_ioq_poll_period_us": 0, 00:15:32.773 "io_queue_requests": 0, 00:15:32.773 "delay_cmd_submit": true, 00:15:32.773 "transport_retry_count": 4, 00:15:32.773 "bdev_retry_count": 3, 00:15:32.773 "transport_ack_timeout": 0, 00:15:32.773 "ctrlr_loss_timeout_sec": 0, 00:15:32.773 "reconnect_delay_sec": 0, 00:15:32.773 "fast_io_fail_timeout_sec": 0, 00:15:32.773 "disable_auto_failback": false, 00:15:32.773 "generate_uuids": false, 00:15:32.773 "transport_tos": 0, 00:15:32.773 "nvme_error_stat": false, 00:15:32.773 "rdma_srq_size": 0, 00:15:32.773 "io_path_stat": false, 00:15:32.773 "allow_accel_sequence": false, 00:15:32.773 "rdma_max_cq_size": 0, 00:15:32.773 "rdma_cm_event_timeout_ms": 0, 00:15:32.773 "dhchap_digests": [ 00:15:32.773 "sha256", 00:15:32.773 "sha384", 00:15:32.773 "sha512" 00:15:32.773 ], 00:15:32.773 "dhchap_dhgroups": [ 00:15:32.773 "null", 00:15:32.773 "ffdhe2048", 00:15:32.773 "ffdhe3072", 00:15:32.773 "ffdhe4096", 00:15:32.773 "ffdhe6144", 00:15:32.773 "ffdhe8192" 00:15:32.773 ] 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "bdev_nvme_set_hotplug", 00:15:32.773 "params": { 00:15:32.773 "period_us": 100000, 00:15:32.773 "enable": false 00:15:32.773 } 00:15:32.773 }, 00:15:32.773 { 00:15:32.773 "method": "bdev_malloc_create", 00:15:32.773 "params": { 00:15:32.773 "name": "malloc0", 00:15:32.774 "num_blocks": 8192, 00:15:32.774 "block_size": 4096, 00:15:32.774 "physical_block_size": 4096, 00:15:32.774 "uuid": "d3521f79-b5d3-4959-9e75-33cce9d90963", 00:15:32.774 "optimal_io_boundary": 0, 00:15:32.774 "md_size": 0, 00:15:32.774 "dif_type": 0, 00:15:32.774 
"dif_is_head_of_md": false, 00:15:32.774 "dif_pi_format": 0 00:15:32.774 } 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "method": "bdev_wait_for_examine" 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "scsi", 00:15:32.774 "config": null 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "scheduler", 00:15:32.774 "config": [ 00:15:32.774 { 00:15:32.774 "method": "framework_set_scheduler", 00:15:32.774 "params": { 00:15:32.774 "name": "static" 00:15:32.774 } 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "vhost_scsi", 00:15:32.774 "config": [] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "vhost_blk", 00:15:32.774 "config": [] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "ublk", 00:15:32.774 "config": [ 00:15:32.774 { 00:15:32.774 "method": "ublk_create_target", 00:15:32.774 "params": { 00:15:32.774 "cpumask": "1" 00:15:32.774 } 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "method": "ublk_start_disk", 00:15:32.774 "params": { 00:15:32.774 "bdev_name": "malloc0", 00:15:32.774 "ublk_id": 0, 00:15:32.774 "num_queues": 1, 00:15:32.774 "queue_depth": 128 00:15:32.774 } 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "nbd", 00:15:32.774 "config": [] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "nvmf", 00:15:32.774 "config": [ 00:15:32.774 { 00:15:32.774 "method": "nvmf_set_config", 00:15:32.774 "params": { 00:15:32.774 "discovery_filter": "match_any", 00:15:32.774 "admin_cmd_passthru": { 00:15:32.774 "identify_ctrlr": false 00:15:32.774 }, 00:15:32.774 "dhchap_digests": [ 00:15:32.774 "sha256", 00:15:32.774 "sha384", 00:15:32.774 "sha512" 00:15:32.774 ], 00:15:32.774 "dhchap_dhgroups": [ 00:15:32.774 "null", 00:15:32.774 "ffdhe2048", 00:15:32.774 "ffdhe3072", 00:15:32.774 "ffdhe4096", 00:15:32.774 "ffdhe6144", 00:15:32.774 "ffdhe8192" 00:15:32.774 ] 00:15:32.774 } 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "method": "nvmf_set_max_subsystems", 00:15:32.774 "params": { 00:15:32.774 "max_subsystems": 1024 00:15:32.774 } 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "method": "nvmf_set_crdt", 00:15:32.774 "params": { 00:15:32.774 "crdt1": 0, 00:15:32.774 "crdt2": 0, 00:15:32.774 "crdt3": 0 00:15:32.774 } 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 }, 00:15:32.774 { 00:15:32.774 "subsystem": "iscsi", 00:15:32.774 "config": [ 00:15:32.774 { 00:15:32.774 "method": "iscsi_set_options", 00:15:32.774 "params": { 00:15:32.774 "node_base": "iqn.2016-06.io.spdk", 00:15:32.774 "max_sessions": 128, 00:15:32.774 "max_connections_per_session": 2, 00:15:32.774 "max_queue_depth": 64, 00:15:32.774 "default_time2wait": 2, 00:15:32.774 "default_time2retain": 20, 00:15:32.774 "first_burst_length": 8192, 00:15:32.774 "immediate_data": true, 00:15:32.774 "allow_duplicated_isid": false, 00:15:32.774 "error_recovery_level": 0, 00:15:32.774 "nop_timeout": 60, 00:15:32.774 "nop_in_interval": 30, 00:15:32.774 "disable_chap": false, 00:15:32.774 "require_chap": false, 00:15:32.774 "mutual_chap": false, 00:15:32.774 "chap_group": 0, 00:15:32.774 "max_large_datain_per_connection": 64, 00:15:32.774 "max_r2t_per_connection": 4, 00:15:32.774 "pdu_pool_size": 36864, 00:15:32.774 "immediate_data_pool_size": 16384, 00:15:32.774 "data_out_pool_size": 2048 00:15:32.774 } 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 } 00:15:32.774 ] 00:15:32.774 }' 00:15:33.033 [2024-10-25 15:20:15.546381] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:15:33.033 [2024-10-25 15:20:15.546515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72449 ] 00:15:33.033 [2024-10-25 15:20:15.732569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.292 [2024-10-25 15:20:15.858116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.670 [2024-10-25 15:20:16.969237] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:34.670 [2024-10-25 15:20:16.970339] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:34.670 [2024-10-25 15:20:16.977369] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:34.670 [2024-10-25 15:20:16.977454] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:34.670 [2024-10-25 15:20:16.977467] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:34.670 [2024-10-25 15:20:16.977475] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:34.670 [2024-10-25 15:20:16.986324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:34.670 [2024-10-25 15:20:16.986350] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:34.670 [2024-10-25 15:20:16.993210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:34.670 [2024-10-25 15:20:16.993308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:34.670 [2024-10-25 15:20:17.010275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72449 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72449 ']' 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72449 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72449 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72449' 00:15:34.670 killing process with pid 72449 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72449 00:15:34.670 15:20:17 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72449 00:15:36.575 [2024-10-25 15:20:18.804167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:36.575 [2024-10-25 15:20:18.850276] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:36.575 [2024-10-25 15:20:18.850438] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:36.575 [2024-10-25 15:20:18.861248] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:36.575 [2024-10-25 15:20:18.861317] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:36.575 [2024-10-25 15:20:18.861327] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:36.575 [2024-10-25 15:20:18.861361] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:36.575 [2024-10-25 15:20:18.861515] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:38.479 15:20:20 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:38.479 00:15:38.479 real 0m11.212s 00:15:38.479 user 0m8.490s 00:15:38.479 sys 0m3.596s 00:15:38.479 15:20:20 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.479 ************************************ 00:15:38.479 END TEST test_save_ublk_config 00:15:38.479 15:20:20 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:38.479 ************************************ 00:15:38.479 15:20:20 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72540 00:15:38.479 15:20:20 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:38.479 15:20:20 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:38.479 15:20:20 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72540 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@831 -- # '[' -z 72540 ']' 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.479 15:20:20 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.479 [2024-10-25 15:20:20.995328] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
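The test_create_ublk case that starts below boils down to four RPCs against this fresh target. Spelled as direct rpc.py calls (syntax inferred from the rpc_cmd traces that follow, so treat the flags as assumptions):

  ./scripts/rpc.py ublk_create_target                      # bring up the SPDK ublk target
  ./scripts/rpc.py bdev_malloc_create 128 4096             # 128 MB backing bdev, prints "Malloc0"
  ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512   # exposes /dev/ublkb0
  ./scripts/rpc.py ublk_get_disks -n 0                     # verify device path, queues, depth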
00:15:38.479 [2024-10-25 15:20:20.995453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72540 ] 00:15:38.479 [2024-10-25 15:20:21.179061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.738 [2024-10-25 15:20:21.307140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.739 [2024-10-25 15:20:21.307214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.676 15:20:22 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.676 15:20:22 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:39.676 15:20:22 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:39.676 15:20:22 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:39.676 15:20:22 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.676 15:20:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.676 ************************************ 00:15:39.676 START TEST test_create_ublk 00:15:39.676 ************************************ 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:39.676 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.676 [2024-10-25 15:20:22.262269] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:39.676 [2024-10-25 15:20:22.269210] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.676 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:39.676 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.676 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.936 [2024-10-25 15:20:22.574416] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:39.936 [2024-10-25 15:20:22.574888] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:39.936 [2024-10-25 15:20:22.574910] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:39.936 [2024-10-25 15:20:22.574919] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:39.936 [2024-10-25 15:20:22.582237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:39.936 [2024-10-25 15:20:22.582267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:39.936 
[2024-10-25 15:20:22.590222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:39.936 [2024-10-25 15:20:22.600267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:39.936 [2024-10-25 15:20:22.611350] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:39.936 15:20:22 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:39.936 { 00:15:39.936 "ublk_device": "/dev/ublkb0", 00:15:39.936 "id": 0, 00:15:39.936 "queue_depth": 512, 00:15:39.936 "num_queues": 4, 00:15:39.936 "bdev_name": "Malloc0" 00:15:39.936 } 00:15:39.936 ]' 00:15:39.936 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:40.195 15:20:22 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:40.196 15:20:22 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
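As an aside, the fio command line that run_fio_test assembles above maps one-to-one onto a job file, which is sometimes easier to tweak by hand. A sketch (the file name is made up for illustration; the option names are standard fio):

  # ublk_write.fio
  [fio_test]
  filename=/dev/ublkb0
  offset=0
  size=134217728
  rw=write
  direct=1
  time_based=1
  runtime=10
  do_verify=1
  verify=pattern
  verify_pattern=0xcc
  verify_state_save=0
  # run with: fio ublk_write.fio

The 'verification read phase will never start' notice fio prints next is expected: with time_based set, the whole 10 s budget goes to the write phase, so the implied verify pass never runs.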
00:15:40.196 15:20:22 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:40.455 fio: verification read phase will never start because write phase uses all of runtime 00:15:40.455 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:40.455 fio-3.35 00:15:40.455 Starting 1 process 00:15:50.431 00:15:50.431 fio_test: (groupid=0, jobs=1): err= 0: pid=72595: Fri Oct 25 15:20:33 2024 00:15:50.431 write: IOPS=13.6k, BW=53.1MiB/s (55.7MB/s)(531MiB/10001msec); 0 zone resets 00:15:50.431 clat (usec): min=53, max=7885, avg=72.62, stdev=138.39 00:15:50.431 lat (usec): min=53, max=7889, avg=73.16, stdev=138.42 00:15:50.431 clat percentiles (usec): 00:15:50.431 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 63], 00:15:50.431 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:15:50.431 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 74], 95.00th=[ 79], 00:15:50.431 | 99.00th=[ 97], 99.50th=[ 116], 99.90th=[ 3032], 99.95th=[ 3458], 00:15:50.431 | 99.99th=[ 3982] 00:15:50.431 bw ( KiB/s): min=17280, max=56872, per=99.85%, avg=54290.79, stdev=8994.83, samples=19 00:15:50.431 iops : min= 4320, max=14218, avg=13572.68, stdev=2248.70, samples=19 00:15:50.431 lat (usec) : 100=99.11%, 250=0.60%, 500=0.01%, 750=0.02%, 1000=0.02% 00:15:50.431 lat (msec) : 2=0.07%, 4=0.16%, 10=0.01% 00:15:50.431 cpu : usr=3.30%, sys=10.08%, ctx=135950, majf=0, minf=797 00:15:50.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:50.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.431 issued rwts: total=0,135947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:50.431 00:15:50.431 Run status group 0 (all jobs): 00:15:50.431 WRITE: bw=53.1MiB/s (55.7MB/s), 53.1MiB/s-53.1MiB/s (55.7MB/s-55.7MB/s), io=531MiB (557MB), run=10001-10001msec 00:15:50.431 00:15:50.431 Disk stats (read/write): 00:15:50.431 ublkb0: ios=0/134470, merge=0/0, ticks=0/8625, in_queue=8626, util=99.00% 00:15:50.431 15:20:33 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:50.431 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.431 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.431 [2024-10-25 15:20:33.103522] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:50.431 [2024-10-25 15:20:33.140689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:50.431 [2024-10-25 15:20:33.141639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:50.431 [2024-10-25 15:20:33.148246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:50.431 [2024-10-25 15:20:33.148572] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:50.431 [2024-10-25 15:20:33.148593] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:50.431 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.431 15:20:33 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:50.431 15:20:33 
ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:50.431 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:50.431 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.689 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.689 [2024-10-25 15:20:33.171332] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:50.690 request: 00:15:50.690 { 00:15:50.690 "ublk_id": 0, 00:15:50.690 "method": "ublk_stop_disk", 00:15:50.690 "req_id": 1 00:15:50.690 } 00:15:50.690 Got JSON-RPC error response 00:15:50.690 response: 00:15:50.690 { 00:15:50.690 "code": -19, 00:15:50.690 "message": "No such device" 00:15:50.690 } 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.690 15:20:33 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.690 [2024-10-25 15:20:33.191333] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.690 [2024-10-25 15:20:33.204196] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:50.690 [2024-10-25 15:20:33.204259] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.690 15:20:33 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.690 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.296 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.296 15:20:33 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:51.296 15:20:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:51.296 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.296 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.296 15:20:33 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.296 15:20:33 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:51.296 15:20:33 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:51.296 15:20:34 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 
']' 00:15:51.296 15:20:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:51.296 15:20:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.297 15:20:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.556 15:20:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.556 15:20:34 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:51.556 15:20:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:51.556 15:20:34 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:51.556 00:15:51.556 real 0m11.825s 00:15:51.556 user 0m0.730s 00:15:51.556 sys 0m1.131s 00:15:51.556 15:20:34 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.556 ************************************ 00:15:51.556 END TEST test_create_ublk 00:15:51.556 ************************************ 00:15:51.556 15:20:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.556 15:20:34 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:51.556 15:20:34 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:51.556 15:20:34 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.556 15:20:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.556 ************************************ 00:15:51.556 START TEST test_create_multi_ublk 00:15:51.556 ************************************ 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.556 [2024-10-25 15:20:34.147221] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:51.556 [2024-10-25 15:20:34.149926] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.556 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.815 [2024-10-25 15:20:34.440364] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:51.815 [2024-10-25 
15:20:34.440828] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:51.815 [2024-10-25 15:20:34.440845] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:51.815 [2024-10-25 15:20:34.440860] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:51.815 [2024-10-25 15:20:34.448219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:51.815 [2024-10-25 15:20:34.448248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:51.815 [2024-10-25 15:20:34.456204] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:51.815 [2024-10-25 15:20:34.456826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:51.815 [2024-10-25 15:20:34.487224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.815 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.074 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.074 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:52.074 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:52.074 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.074 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.074 [2024-10-25 15:20:34.775351] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:52.074 [2024-10-25 15:20:34.775805] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:52.074 [2024-10-25 15:20:34.775825] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:52.074 [2024-10-25 15:20:34.775834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:52.074 [2024-10-25 15:20:34.786282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:52.074 [2024-10-25 15:20:34.786308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:52.074 [2024-10-25 15:20:34.794217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:52.074 [2024-10-25 15:20:34.794826] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:52.334 [2024-10-25 15:20:34.807246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.334 15:20:34 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.594 [2024-10-25 15:20:35.106352] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:52.594 [2024-10-25 15:20:35.106852] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:52.594 [2024-10-25 15:20:35.106871] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:52.594 [2024-10-25 15:20:35.106882] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:52.594 [2024-10-25 15:20:35.114228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:52.594 [2024-10-25 15:20:35.114260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:52.594 [2024-10-25 15:20:35.122215] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:52.594 [2024-10-25 15:20:35.122829] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:52.594 [2024-10-25 15:20:35.125794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.594 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.853 [2024-10-25 15:20:35.426355] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:52.853 [2024-10-25 15:20:35.426827] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:52.853 [2024-10-25 15:20:35.426848] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:52.853 [2024-10-25 15:20:35.426857] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:52.853 [2024-10-25 15:20:35.434228] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:52.853 [2024-10-25 15:20:35.434252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:52.853 [2024-10-25 15:20:35.442259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:52.853 [2024-10-25 15:20:35.442877] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:52.853 [2024-10-25 15:20:35.458223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:52.853 { 00:15:52.853 "ublk_device": "/dev/ublkb0", 00:15:52.853 "id": 0, 00:15:52.853 "queue_depth": 512, 00:15:52.853 "num_queues": 4, 00:15:52.853 "bdev_name": "Malloc0" 00:15:52.853 }, 00:15:52.853 { 00:15:52.853 "ublk_device": "/dev/ublkb1", 00:15:52.853 "id": 1, 00:15:52.853 "queue_depth": 512, 00:15:52.853 "num_queues": 4, 00:15:52.853 "bdev_name": "Malloc1" 00:15:52.853 }, 00:15:52.853 { 00:15:52.853 "ublk_device": "/dev/ublkb2", 00:15:52.853 "id": 2, 00:15:52.853 "queue_depth": 512, 00:15:52.853 "num_queues": 4, 00:15:52.853 "bdev_name": "Malloc2" 00:15:52.853 }, 00:15:52.853 { 00:15:52.853 "ublk_device": "/dev/ublkb3", 00:15:52.853 "id": 3, 00:15:52.853 "queue_depth": 512, 00:15:52.853 "num_queues": 4, 00:15:52.853 "bdev_name": "Malloc3" 00:15:52.853 } 00:15:52.853 ]' 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:52.853 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:53.112 15:20:35 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:53.112 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:53.372 15:20:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:53.372 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:53.372 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:53.372 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:53.372 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:53.632 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.891 [2024-10-25 15:20:36.422334] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:15:53.891 [2024-10-25 15:20:36.466250] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:53.891 [2024-10-25 15:20:36.467210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:53.891 [2024-10-25 15:20:36.468387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:53.891 [2024-10-25 15:20:36.468689] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:53.891 [2024-10-25 15:20:36.468710] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.891 [2024-10-25 15:20:36.479329] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:53.891 [2024-10-25 15:20:36.517286] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:53.891 [2024-10-25 15:20:36.518167] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:53.891 [2024-10-25 15:20:36.525226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:53.891 [2024-10-25 15:20:36.525523] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:53.891 [2024-10-25 15:20:36.525543] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.891 [2024-10-25 15:20:36.539315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:53.891 [2024-10-25 15:20:36.572644] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:53.891 [2024-10-25 15:20:36.573620] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:53.891 [2024-10-25 15:20:36.580236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:53.891 [2024-10-25 15:20:36.580529] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:53.891 [2024-10-25 15:20:36.580547] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.891 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.891 [2024-10-25 
15:20:36.603309] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:54.150 [2024-10-25 15:20:36.638216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:54.150 [2024-10-25 15:20:36.639016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:54.150 [2024-10-25 15:20:36.647238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:54.150 [2024-10-25 15:20:36.647522] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:54.150 [2024-10-25 15:20:36.647540] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:54.150 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.150 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:54.409 [2024-10-25 15:20:36.895304] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:54.409 [2024-10-25 15:20:36.903198] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:54.409 [2024-10-25 15:20:36.903256] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:54.409 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:54.409 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.409 15:20:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:54.409 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.409 15:20:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:54.978 15:20:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.978 15:20:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:54.978 15:20:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:54.978 15:20:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.978 15:20:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.546 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.546 15:20:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:55.546 15:20:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:55.546 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.546 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:55.805 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.805 15:20:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:55.805 15:20:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:55.805 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.805 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
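Condensed, the multi-device teardown traced above (with the leftover check continuing below) is: stop ublk devices 0-3, destroy the ublk target (the test calls rpc.py with -t 120 to give the kernel-side shutdown time to finish), delete the four malloc bdevs, then confirm nothing is left. As a sketch, with loops as shorthand for the unrolled per-device calls in the log:

  for i in 0 1 2 3; do ./scripts/rpc.py ublk_stop_disk "$i"; done
  ./scripts/rpc.py -t 120 ublk_destroy_target
  for b in Malloc0 Malloc1 Malloc2 Malloc3; do ./scripts/rpc.py bdev_malloc_delete "$b"; done
  ./scripts/rpc.py bdev_get_bdevs | jq length              # expect 0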
00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:56.064 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:56.323 ************************************ 00:15:56.323 END TEST test_create_multi_ublk 00:15:56.323 ************************************ 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:56.323 00:15:56.323 real 0m4.750s 00:15:56.323 user 0m1.132s 00:15:56.323 sys 0m0.218s 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.323 15:20:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.323 15:20:38 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:56.323 15:20:38 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:56.323 15:20:38 ublk -- ublk/ublk.sh@130 -- # killprocess 72540 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@950 -- # '[' -z 72540 ']' 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@954 -- # kill -0 72540 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@955 -- # uname 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72540 00:15:56.323 killing process with pid 72540 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72540' 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@969 -- # kill 72540 00:15:56.323 15:20:38 ublk -- common/autotest_common.sh@974 -- # wait 72540 00:15:57.703 [2024-10-25 15:20:40.200355] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:57.703 [2024-10-25 15:20:40.200419] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:59.109 00:15:59.109 real 0m32.149s 00:15:59.109 user 0m45.774s 00:15:59.109 sys 0m10.763s 00:15:59.109 ************************************ 00:15:59.109 END TEST ublk 00:15:59.109 ************************************ 00:15:59.109 15:20:41 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:59.109 15:20:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.109 15:20:41 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:59.109 15:20:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:59.109 
15:20:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:59.109 15:20:41 -- common/autotest_common.sh@10 -- # set +x 00:15:59.109 ************************************ 00:15:59.109 START TEST ublk_recovery 00:15:59.109 ************************************ 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:59.109 * Looking for test storage... 00:15:59.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1689 -- # lcov --version 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:59.109 15:20:41 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:59.109 15:20:41 ublk_recovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.110 --rc genhtml_branch_coverage=1 00:15:59.110 --rc genhtml_function_coverage=1 00:15:59.110 --rc genhtml_legend=1 00:15:59.110 --rc geninfo_all_blocks=1 00:15:59.110 --rc geninfo_unexecuted_blocks=1 00:15:59.110 00:15:59.110 ' 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.110 --rc genhtml_branch_coverage=1 00:15:59.110 --rc genhtml_function_coverage=1 00:15:59.110 --rc genhtml_legend=1 00:15:59.110 --rc geninfo_all_blocks=1 00:15:59.110 --rc geninfo_unexecuted_blocks=1 00:15:59.110 00:15:59.110 ' 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.110 --rc genhtml_branch_coverage=1 00:15:59.110 --rc genhtml_function_coverage=1 00:15:59.110 --rc genhtml_legend=1 00:15:59.110 --rc geninfo_all_blocks=1 00:15:59.110 --rc geninfo_unexecuted_blocks=1 00:15:59.110 00:15:59.110 ' 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:59.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:59.110 --rc genhtml_branch_coverage=1 00:15:59.110 --rc genhtml_function_coverage=1 00:15:59.110 --rc genhtml_legend=1 00:15:59.110 --rc geninfo_all_blocks=1 00:15:59.110 --rc geninfo_unexecuted_blocks=1 00:15:59.110 00:15:59.110 ' 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:59.110 15:20:41 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72979 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:59.110 15:20:41 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72979 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 72979 ']' 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:59.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:59.110 15:20:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.369 [2024-10-25 15:20:41.916161] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:15:59.369 [2024-10-25 15:20:41.916306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72979 ] 00:15:59.627 [2024-10-25 15:20:42.100226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:59.627 [2024-10-25 15:20:42.226578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.627 [2024-10-25 15:20:42.226611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:16:00.565 15:20:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.565 [2024-10-25 15:20:43.161200] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:00.565 [2024-10-25 15:20:43.168016] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.565 15:20:43 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.565 15:20:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.824 malloc0 00:16:00.824 15:20:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.824 15:20:43 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:00.824 15:20:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.824 15:20:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.824 [2024-10-25 15:20:43.332371] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:00.824 [2024-10-25 15:20:43.332495] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:00.824 [2024-10-25 15:20:43.332511] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:00.824 [2024-10-25 15:20:43.332523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:00.825 [2024-10-25 15:20:43.340241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:00.825 [2024-10-25 15:20:43.340267] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:00.825 [2024-10-25 15:20:43.348223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:00.825 [2024-10-25 15:20:43.348373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:00.825 [2024-10-25 15:20:43.373225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:00.825 1 00:16:00.825 15:20:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.825 15:20:43 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:01.761 15:20:44 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73018 00:16:01.761 15:20:44 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:01.761 15:20:44 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:02.020 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.020 fio-3.35 00:16:02.020 Starting 1 process 00:16:07.293 15:20:49 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72979 00:16:07.293 15:20:49 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:12.568 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72979 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:12.568 15:20:54 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73126 00:16:12.568 15:20:54 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:12.568 15:20:54 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.568 15:20:54 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73126 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73126 ']' 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:12.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.568 15:20:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.568 [2024-10-25 15:20:54.510863] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:16:12.568 [2024-10-25 15:20:54.511017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73126 ] 00:16:12.568 [2024-10-25 15:20:54.682388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:12.568 [2024-10-25 15:20:54.809825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.568 [2024-10-25 15:20:54.809861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:16:13.137 15:20:55 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.137 [2024-10-25 15:20:55.744211] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:13.137 [2024-10-25 15:20:55.747145] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.137 15:20:55 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.137 15:20:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.439 malloc0 00:16:13.439 15:20:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.439 15:20:55 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:13.439 15:20:55 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.439 15:20:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:13.439 [2024-10-25 15:20:55.906355] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:13.439 [2024-10-25 15:20:55.906402] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:13.439 [2024-10-25 15:20:55.906414] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:13.439 [2024-10-25 15:20:55.914244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:13.439 [2024-10-25 15:20:55.914274] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:13.439 [2024-10-25 15:20:55.914284] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:13.439 [2024-10-25 15:20:55.914376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:13.439 1 00:16:13.439 15:20:55 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.439 15:20:55 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73018 00:16:13.439 [2024-10-25 15:20:55.922257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:13.439 [2024-10-25 15:20:55.928922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:13.439 [2024-10-25 15:20:55.936422] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:13.439 [2024-10-25 
15:20:55.936449] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:09.811 00:17:09.811 fio_test: (groupid=0, jobs=1): err= 0: pid=73021: Fri Oct 25 15:21:44 2024 00:17:09.811 read: IOPS=21.2k, BW=82.8MiB/s (86.8MB/s)(4968MiB/60002msec) 00:17:09.811 slat (usec): min=2, max=462, avg= 7.86, stdev= 3.00 00:17:09.811 clat (usec): min=1276, max=6558.1k, avg=2934.75, stdev=43877.92 00:17:09.811 lat (usec): min=1284, max=6558.1k, avg=2942.61, stdev=43877.92 00:17:09.811 clat percentiles (usec): 00:17:09.811 | 1.00th=[ 2008], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:17:09.811 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2507], 60.00th=[ 2573], 00:17:09.811 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 3032], 95.00th=[ 3851], 00:17:09.811 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 6915], 99.95th=[ 7242], 00:17:09.811 | 99.99th=[ 8979] 00:17:09.811 bw ( KiB/s): min=20199, max=101960, per=100.00%, avg=94280.77, stdev=10140.10, samples=107 00:17:09.811 iops : min= 5049, max=25490, avg=23570.15, stdev=2535.08, samples=107 00:17:09.811 write: IOPS=21.2k, BW=82.7MiB/s (86.8MB/s)(4965MiB/60002msec); 0 zone resets 00:17:09.811 slat (usec): min=2, max=753, avg= 7.88, stdev= 3.09 00:17:09.811 clat (usec): min=1216, max=6558.4k, avg=3088.91, stdev=48988.10 00:17:09.811 lat (usec): min=1232, max=6558.4k, avg=3096.79, stdev=48988.11 00:17:09.811 clat percentiles (usec): 00:17:09.811 | 1.00th=[ 2024], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:17:09.811 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2606], 60.00th=[ 2671], 00:17:09.811 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3064], 95.00th=[ 3851], 00:17:09.811 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7046], 99.95th=[ 7373], 00:17:09.811 | 99.99th=[ 9241] 00:17:09.811 bw ( KiB/s): min=20774, max=101792, per=100.00%, avg=94194.23, stdev=10027.03, samples=107 00:17:09.811 iops : min= 5193, max=25448, avg=23548.53, stdev=2506.79, samples=107 00:17:09.811 lat (msec) : 2=0.89%, 4=94.83%, 10=4.27%, 20=0.01%, >=2000=0.01% 00:17:09.811 cpu : usr=12.69%, sys=33.20%, ctx=107665, majf=0, minf=13 00:17:09.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:09.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:09.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:09.811 issued rwts: total=1271825,1270920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:09.811 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:09.811 00:17:09.811 Run status group 0 (all jobs): 00:17:09.811 READ: bw=82.8MiB/s (86.8MB/s), 82.8MiB/s-82.8MiB/s (86.8MB/s-86.8MB/s), io=4968MiB (5209MB), run=60002-60002msec 00:17:09.811 WRITE: bw=82.7MiB/s (86.8MB/s), 82.7MiB/s-82.7MiB/s (86.8MB/s-86.8MB/s), io=4965MiB (5206MB), run=60002-60002msec 00:17:09.811 00:17:09.811 Disk stats (read/write): 00:17:09.811 ublkb1: ios=1269116/1268197, merge=0/0, ticks=3600062/3658592, in_queue=7258655, util=99.95% 00:17:09.811 15:21:44 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.811 [2024-10-25 15:21:44.663399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:09.811 [2024-10-25 15:21:44.692376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:09.811 [2024-10-25 15:21:44.692556] 
ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:09.811 [2024-10-25 15:21:44.700241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:09.811 [2024-10-25 15:21:44.700356] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:09.811 [2024-10-25 15:21:44.700386] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.811 15:21:44 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.811 [2024-10-25 15:21:44.708408] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.811 [2024-10-25 15:21:44.715703] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:09.811 [2024-10-25 15:21:44.715753] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.811 15:21:44 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:09.811 15:21:44 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:09.811 15:21:44 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73126 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73126 ']' 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73126 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73126 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.811 killing process with pid 73126 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73126' 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73126 00:17:09.811 15:21:44 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73126 00:17:09.811 [2024-10-25 15:21:46.435026] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:09.811 [2024-10-25 15:21:46.435080] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:09.811 00:17:09.811 real 1m6.311s 00:17:09.811 user 1m49.668s 00:17:09.811 sys 0m39.611s 00:17:09.811 15:21:47 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.811 15:21:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:09.811 ************************************ 00:17:09.811 END TEST ublk_recovery 00:17:09.811 ************************************ 00:17:09.811 15:21:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:09.811 15:21:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.811 15:21:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.811 15:21:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@311 
-- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:09.811 15:21:47 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:09.811 15:21:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:09.811 15:21:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.811 15:21:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.811 ************************************ 00:17:09.811 START TEST ftl 00:17:09.811 ************************************ 00:17:09.811 15:21:48 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:09.811 * Looking for test storage... 00:17:09.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.811 15:21:48 ftl -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:09.811 15:21:48 ftl -- common/autotest_common.sh@1689 -- # lcov --version 00:17:09.811 15:21:48 ftl -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:09.811 15:21:48 ftl -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:09.811 15:21:48 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:09.811 15:21:48 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:09.811 15:21:48 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:09.811 15:21:48 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.811 15:21:48 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:09.811 15:21:48 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:09.811 15:21:48 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:09.811 15:21:48 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:09.811 15:21:48 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:09.811 15:21:48 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:09.811 15:21:48 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:09.811 15:21:48 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:09.811 15:21:48 ftl -- scripts/common.sh@345 -- # : 1 00:17:09.811 15:21:48 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:09.811 15:21:48 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:09.811 15:21:48 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:09.811 15:21:48 ftl -- scripts/common.sh@353 -- # local d=1 00:17:09.811 15:21:48 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.811 15:21:48 ftl -- scripts/common.sh@355 -- # echo 1 00:17:09.812 15:21:48 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:09.812 15:21:48 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:09.812 15:21:48 ftl -- scripts/common.sh@353 -- # local d=2 00:17:09.812 15:21:48 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.812 15:21:48 ftl -- scripts/common.sh@355 -- # echo 2 00:17:09.812 15:21:48 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:09.812 15:21:48 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:09.812 15:21:48 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:09.812 15:21:48 ftl -- scripts/common.sh@368 -- # return 0 00:17:09.812 15:21:48 ftl -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.812 15:21:48 ftl -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:09.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.812 --rc genhtml_branch_coverage=1 00:17:09.812 --rc genhtml_function_coverage=1 00:17:09.812 --rc genhtml_legend=1 00:17:09.812 --rc geninfo_all_blocks=1 00:17:09.812 --rc geninfo_unexecuted_blocks=1 00:17:09.812 00:17:09.812 ' 00:17:09.812 15:21:48 ftl -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:09.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.812 --rc genhtml_branch_coverage=1 00:17:09.812 --rc genhtml_function_coverage=1 00:17:09.812 --rc genhtml_legend=1 00:17:09.812 --rc geninfo_all_blocks=1 00:17:09.812 --rc geninfo_unexecuted_blocks=1 00:17:09.812 00:17:09.812 ' 00:17:09.812 15:21:48 ftl -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:09.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.812 --rc genhtml_branch_coverage=1 00:17:09.812 --rc genhtml_function_coverage=1 00:17:09.812 --rc genhtml_legend=1 00:17:09.812 --rc geninfo_all_blocks=1 00:17:09.812 --rc geninfo_unexecuted_blocks=1 00:17:09.812 00:17:09.812 ' 00:17:09.812 15:21:48 ftl -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:09.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.812 --rc genhtml_branch_coverage=1 00:17:09.812 --rc genhtml_function_coverage=1 00:17:09.812 --rc genhtml_legend=1 00:17:09.812 --rc geninfo_all_blocks=1 00:17:09.812 --rc geninfo_unexecuted_blocks=1 00:17:09.812 00:17:09.812 ' 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:09.812 15:21:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:09.812 15:21:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.812 15:21:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:09.812 15:21:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:09.812 15:21:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:09.812 15:21:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.812 15:21:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.812 15:21:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.812 15:21:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.812 15:21:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:09.812 15:21:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:09.812 15:21:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:09.812 15:21:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.812 15:21:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:09.812 15:21:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:09.812 15:21:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.812 15:21:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:09.812 15:21:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.812 15:21:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:09.812 15:21:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:09.812 15:21:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:09.812 15:21:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.812 15:21:48 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:09.812 15:21:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:09.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.812 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:09.812 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:09.812 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:09.812 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:09.812 15:21:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73939 00:17:09.812 15:21:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73939 00:17:09.812 15:21:49 ftl -- common/autotest_common.sh@831 -- # '[' -z 73939 ']' 00:17:09.812 15:21:49 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.812 15:21:49 ftl -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.812 15:21:49 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.812 15:21:49 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.812 15:21:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:09.812 15:21:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:09.812 [2024-10-25 15:21:49.230794] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:17:09.812 [2024-10-25 15:21:49.230946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73939 ] 00:17:09.812 [2024-10-25 15:21:49.413319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.812 [2024-10-25 15:21:49.539055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.812 15:21:50 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:09.812 15:21:50 ftl -- common/autotest_common.sh@864 -- # return 0 00:17:09.812 15:21:50 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:09.812 15:21:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:09.812 15:21:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:09.812 15:21:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:09.812 15:21:51 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:09.812 15:21:51 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:09.812 15:21:51 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@50 -- # break 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@63 -- # break 00:17:09.812 15:21:52 ftl -- ftl/ftl.sh@66 -- # killprocess 73939 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@950 -- # '[' -z 73939 ']' 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@954 -- # kill -0 73939 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@955 -- # uname 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.812 15:21:52 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73939 00:17:09.812 killing process with pid 73939 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73939' 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@969 -- # kill 73939 00:17:09.812 15:21:52 ftl -- common/autotest_common.sh@974 -- # wait 73939 00:17:12.352 15:21:54 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:12.352 15:21:54 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:12.352 15:21:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:12.352 15:21:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:12.352 15:21:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:12.352 ************************************ 00:17:12.352 START TEST ftl_fio_basic 00:17:12.352 ************************************ 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:12.352 * Looking for test storage... 00:17:12.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lcov --version 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.352 --rc genhtml_branch_coverage=1 00:17:12.352 --rc genhtml_function_coverage=1 00:17:12.352 --rc genhtml_legend=1 00:17:12.352 --rc geninfo_all_blocks=1 00:17:12.352 --rc geninfo_unexecuted_blocks=1 00:17:12.352 00:17:12.352 ' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.352 --rc genhtml_branch_coverage=1 00:17:12.352 --rc genhtml_function_coverage=1 00:17:12.352 --rc genhtml_legend=1 00:17:12.352 --rc geninfo_all_blocks=1 00:17:12.352 --rc geninfo_unexecuted_blocks=1 00:17:12.352 00:17:12.352 ' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.352 --rc genhtml_branch_coverage=1 00:17:12.352 --rc genhtml_function_coverage=1 00:17:12.352 --rc genhtml_legend=1 00:17:12.352 --rc geninfo_all_blocks=1 00:17:12.352 --rc geninfo_unexecuted_blocks=1 00:17:12.352 00:17:12.352 ' 00:17:12.352 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.352 --rc genhtml_branch_coverage=1 00:17:12.352 --rc genhtml_function_coverage=1 00:17:12.352 --rc genhtml_legend=1 00:17:12.353 --rc geninfo_all_blocks=1 00:17:12.353 --rc geninfo_unexecuted_blocks=1 00:17:12.353 00:17:12.353 ' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74088 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74088 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74088 ']' 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:12.353 15:21:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:12.612 [2024-10-25 15:21:55.085198] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:17:12.612 [2024-10-25 15:21:55.085323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74088 ] 00:17:12.612 [2024-10-25 15:21:55.266608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.871 [2024-10-25 15:21:55.387538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.871 [2024-10-25 15:21:55.387702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.871 [2024-10-25 15:21:55.388286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:13.807 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:14.067 { 00:17:14.067 "name": "nvme0n1", 00:17:14.067 "aliases": [ 00:17:14.067 "06ae7868-f19e-4d6b-8e4a-f292ae3976b5" 00:17:14.067 ], 00:17:14.067 "product_name": "NVMe disk", 00:17:14.067 "block_size": 4096, 00:17:14.067 "num_blocks": 1310720, 00:17:14.067 "uuid": "06ae7868-f19e-4d6b-8e4a-f292ae3976b5", 00:17:14.067 "numa_id": -1, 00:17:14.067 "assigned_rate_limits": { 00:17:14.067 "rw_ios_per_sec": 0, 00:17:14.067 "rw_mbytes_per_sec": 0, 00:17:14.067 "r_mbytes_per_sec": 0, 00:17:14.067 "w_mbytes_per_sec": 0 00:17:14.067 }, 00:17:14.067 "claimed": false, 00:17:14.067 "zoned": false, 00:17:14.067 "supported_io_types": { 00:17:14.067 "read": true, 00:17:14.067 "write": true, 00:17:14.067 "unmap": true, 00:17:14.067 "flush": true, 00:17:14.067 "reset": true, 00:17:14.067 "nvme_admin": true, 00:17:14.067 "nvme_io": true, 00:17:14.067 "nvme_io_md": false, 00:17:14.067 "write_zeroes": true, 00:17:14.067 "zcopy": false, 00:17:14.067 "get_zone_info": false, 00:17:14.067 "zone_management": false, 00:17:14.067 "zone_append": false, 00:17:14.067 "compare": true, 00:17:14.067 "compare_and_write": false, 00:17:14.067 "abort": true, 00:17:14.067 
"seek_hole": false, 00:17:14.067 "seek_data": false, 00:17:14.067 "copy": true, 00:17:14.067 "nvme_iov_md": false 00:17:14.067 }, 00:17:14.067 "driver_specific": { 00:17:14.067 "nvme": [ 00:17:14.067 { 00:17:14.067 "pci_address": "0000:00:11.0", 00:17:14.067 "trid": { 00:17:14.067 "trtype": "PCIe", 00:17:14.067 "traddr": "0000:00:11.0" 00:17:14.067 }, 00:17:14.067 "ctrlr_data": { 00:17:14.067 "cntlid": 0, 00:17:14.067 "vendor_id": "0x1b36", 00:17:14.067 "model_number": "QEMU NVMe Ctrl", 00:17:14.067 "serial_number": "12341", 00:17:14.067 "firmware_revision": "8.0.0", 00:17:14.067 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:14.067 "oacs": { 00:17:14.067 "security": 0, 00:17:14.067 "format": 1, 00:17:14.067 "firmware": 0, 00:17:14.067 "ns_manage": 1 00:17:14.067 }, 00:17:14.067 "multi_ctrlr": false, 00:17:14.067 "ana_reporting": false 00:17:14.067 }, 00:17:14.067 "vs": { 00:17:14.067 "nvme_version": "1.4" 00:17:14.067 }, 00:17:14.067 "ns_data": { 00:17:14.067 "id": 1, 00:17:14.067 "can_share": false 00:17:14.067 } 00:17:14.067 } 00:17:14.067 ], 00:17:14.067 "mp_policy": "active_passive" 00:17:14.067 } 00:17:14.067 } 00:17:14.067 ]' 00:17:14.067 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:14.326 15:21:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:14.585 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:14.585 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:14.585 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=60050628-e438-48f8-ae17-e7e7cafbeb3a 00:17:14.585 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 60050628-e438-48f8-ae17-e7e7cafbeb3a 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=af757f87-a176-47ba-abc9-c157df4011bc 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 af757f87-a176-47ba-abc9-c157df4011bc 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=af757f87-a176-47ba-abc9-c157df4011bc 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size af757f87-a176-47ba-abc9-c157df4011bc 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=af757f87-a176-47ba-abc9-c157df4011bc 
00:17:14.844 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:14.844 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:15.104 { 00:17:15.104 "name": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:15.104 "aliases": [ 00:17:15.104 "lvs/nvme0n1p0" 00:17:15.104 ], 00:17:15.104 "product_name": "Logical Volume", 00:17:15.104 "block_size": 4096, 00:17:15.104 "num_blocks": 26476544, 00:17:15.104 "uuid": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:15.104 "assigned_rate_limits": { 00:17:15.104 "rw_ios_per_sec": 0, 00:17:15.104 "rw_mbytes_per_sec": 0, 00:17:15.104 "r_mbytes_per_sec": 0, 00:17:15.104 "w_mbytes_per_sec": 0 00:17:15.104 }, 00:17:15.104 "claimed": false, 00:17:15.104 "zoned": false, 00:17:15.104 "supported_io_types": { 00:17:15.104 "read": true, 00:17:15.104 "write": true, 00:17:15.104 "unmap": true, 00:17:15.104 "flush": false, 00:17:15.104 "reset": true, 00:17:15.104 "nvme_admin": false, 00:17:15.104 "nvme_io": false, 00:17:15.104 "nvme_io_md": false, 00:17:15.104 "write_zeroes": true, 00:17:15.104 "zcopy": false, 00:17:15.104 "get_zone_info": false, 00:17:15.104 "zone_management": false, 00:17:15.104 "zone_append": false, 00:17:15.104 "compare": false, 00:17:15.104 "compare_and_write": false, 00:17:15.104 "abort": false, 00:17:15.104 "seek_hole": true, 00:17:15.104 "seek_data": true, 00:17:15.104 "copy": false, 00:17:15.104 "nvme_iov_md": false 00:17:15.104 }, 00:17:15.104 "driver_specific": { 00:17:15.104 "lvol": { 00:17:15.104 "lvol_store_uuid": "60050628-e438-48f8-ae17-e7e7cafbeb3a", 00:17:15.104 "base_bdev": "nvme0n1", 00:17:15.104 "thin_provision": true, 00:17:15.104 "num_allocated_clusters": 0, 00:17:15.104 "snapshot": false, 00:17:15.104 "clone": false, 00:17:15.104 "esnap_clone": false 00:17:15.104 } 00:17:15.104 } 00:17:15.104 } 00:17:15.104 ]' 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:15.104 15:21:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.363 15:21:58 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:15.363 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.622 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:15.622 { 00:17:15.622 "name": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:15.622 "aliases": [ 00:17:15.622 "lvs/nvme0n1p0" 00:17:15.622 ], 00:17:15.622 "product_name": "Logical Volume", 00:17:15.622 "block_size": 4096, 00:17:15.622 "num_blocks": 26476544, 00:17:15.622 "uuid": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:15.622 "assigned_rate_limits": { 00:17:15.622 "rw_ios_per_sec": 0, 00:17:15.622 "rw_mbytes_per_sec": 0, 00:17:15.622 "r_mbytes_per_sec": 0, 00:17:15.622 "w_mbytes_per_sec": 0 00:17:15.622 }, 00:17:15.622 "claimed": false, 00:17:15.622 "zoned": false, 00:17:15.622 "supported_io_types": { 00:17:15.622 "read": true, 00:17:15.622 "write": true, 00:17:15.622 "unmap": true, 00:17:15.622 "flush": false, 00:17:15.622 "reset": true, 00:17:15.622 "nvme_admin": false, 00:17:15.622 "nvme_io": false, 00:17:15.622 "nvme_io_md": false, 00:17:15.622 "write_zeroes": true, 00:17:15.622 "zcopy": false, 00:17:15.622 "get_zone_info": false, 00:17:15.622 "zone_management": false, 00:17:15.622 "zone_append": false, 00:17:15.622 "compare": false, 00:17:15.622 "compare_and_write": false, 00:17:15.622 "abort": false, 00:17:15.622 "seek_hole": true, 00:17:15.622 "seek_data": true, 00:17:15.623 "copy": false, 00:17:15.623 "nvme_iov_md": false 00:17:15.623 }, 00:17:15.623 "driver_specific": { 00:17:15.623 "lvol": { 00:17:15.623 "lvol_store_uuid": "60050628-e438-48f8-ae17-e7e7cafbeb3a", 00:17:15.623 "base_bdev": "nvme0n1", 00:17:15.623 "thin_provision": true, 00:17:15.623 "num_allocated_clusters": 0, 00:17:15.623 "snapshot": false, 00:17:15.623 "clone": false, 00:17:15.623 "esnap_clone": false 00:17:15.623 } 00:17:15.623 } 00:17:15.623 } 00:17:15.623 ]' 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:15.623 15:21:58 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:15.882 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=af757f87-a176-47ba-abc9-c157df4011bc 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:17:15.882 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b af757f87-a176-47ba-abc9-c157df4011bc 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.141 { 00:17:16.141 "name": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:16.141 "aliases": [ 00:17:16.141 "lvs/nvme0n1p0" 00:17:16.141 ], 00:17:16.141 "product_name": "Logical Volume", 00:17:16.141 "block_size": 4096, 00:17:16.141 "num_blocks": 26476544, 00:17:16.141 "uuid": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:16.141 "assigned_rate_limits": { 00:17:16.141 "rw_ios_per_sec": 0, 00:17:16.141 "rw_mbytes_per_sec": 0, 00:17:16.141 "r_mbytes_per_sec": 0, 00:17:16.141 "w_mbytes_per_sec": 0 00:17:16.141 }, 00:17:16.141 "claimed": false, 00:17:16.141 "zoned": false, 00:17:16.141 "supported_io_types": { 00:17:16.141 "read": true, 00:17:16.141 "write": true, 00:17:16.141 "unmap": true, 00:17:16.141 "flush": false, 00:17:16.141 "reset": true, 00:17:16.141 "nvme_admin": false, 00:17:16.141 "nvme_io": false, 00:17:16.141 "nvme_io_md": false, 00:17:16.141 "write_zeroes": true, 00:17:16.141 "zcopy": false, 00:17:16.141 "get_zone_info": false, 00:17:16.141 "zone_management": false, 00:17:16.141 "zone_append": false, 00:17:16.141 "compare": false, 00:17:16.141 "compare_and_write": false, 00:17:16.141 "abort": false, 00:17:16.141 "seek_hole": true, 00:17:16.141 "seek_data": true, 00:17:16.141 "copy": false, 00:17:16.141 "nvme_iov_md": false 00:17:16.141 }, 00:17:16.141 "driver_specific": { 00:17:16.141 "lvol": { 00:17:16.141 "lvol_store_uuid": "60050628-e438-48f8-ae17-e7e7cafbeb3a", 00:17:16.141 "base_bdev": "nvme0n1", 00:17:16.141 "thin_provision": true, 00:17:16.141 "num_allocated_clusters": 0, 00:17:16.141 "snapshot": false, 00:17:16.141 "clone": false, 00:17:16.141 "esnap_clone": false 00:17:16.141 } 00:17:16.141 } 00:17:16.141 } 00:17:16.141 ]' 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:16.141 15:21:58 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d af757f87-a176-47ba-abc9-c157df4011bc -c nvc0n1p0 --l2p_dram_limit 60 00:17:16.401 [2024-10-25 15:21:59.071904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.401 [2024-10-25 15:21:59.071964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:16.401 [2024-10-25 15:21:59.071985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:16.401 
[2024-10-25 15:21:59.071997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.401 [2024-10-25 15:21:59.072074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.401 [2024-10-25 15:21:59.072088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:16.401 [2024-10-25 15:21:59.072103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:16.401 [2024-10-25 15:21:59.072117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.401 [2024-10-25 15:21:59.072160] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:16.401 [2024-10-25 15:21:59.073274] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:16.401 [2024-10-25 15:21:59.073320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.401 [2024-10-25 15:21:59.073333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:16.401 [2024-10-25 15:21:59.073348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:17:16.401 [2024-10-25 15:21:59.073358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.401 [2024-10-25 15:21:59.073469] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 07ea29f9-daa2-4536-a810-2ba9e7bdd6b5 00:17:16.401 [2024-10-25 15:21:59.075012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.401 [2024-10-25 15:21:59.075050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:16.401 [2024-10-25 15:21:59.075067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:16.401 [2024-10-25 15:21:59.075080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.401 [2024-10-25 15:21:59.082769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.401 [2024-10-25 15:21:59.082810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:16.401 [2024-10-25 15:21:59.082823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.637 ms 00:17:16.402 [2024-10-25 15:21:59.082838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.082972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 15:21:59.082995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:16.402 [2024-10-25 15:21:59.083006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:16.402 [2024-10-25 15:21:59.083024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.083109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 15:21:59.083125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:16.402 [2024-10-25 15:21:59.083137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.402 [2024-10-25 15:21:59.083150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.083220] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:16.402 [2024-10-25 15:21:59.088687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 
15:21:59.088725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:16.402 [2024-10-25 15:21:59.088756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.517 ms 00:17:16.402 [2024-10-25 15:21:59.088768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.088818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 15:21:59.088834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:16.402 [2024-10-25 15:21:59.088848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:16.402 [2024-10-25 15:21:59.088859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.088932] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:16.402 [2024-10-25 15:21:59.089097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:16.402 [2024-10-25 15:21:59.089124] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:16.402 [2024-10-25 15:21:59.089139] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:16.402 [2024-10-25 15:21:59.089172] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089185] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089213] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:16.402 [2024-10-25 15:21:59.089224] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:16.402 [2024-10-25 15:21:59.089237] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:16.402 [2024-10-25 15:21:59.089248] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:16.402 [2024-10-25 15:21:59.089263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 15:21:59.089275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:16.402 [2024-10-25 15:21:59.089294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.353 ms 00:17:16.402 [2024-10-25 15:21:59.089305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.089396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.402 [2024-10-25 15:21:59.089407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:16.402 [2024-10-25 15:21:59.089421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:16.402 [2024-10-25 15:21:59.089432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.402 [2024-10-25 15:21:59.089540] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:16.402 [2024-10-25 15:21:59.089563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:16.402 [2024-10-25 15:21:59.089578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089606] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:16.402 [2024-10-25 15:21:59.089616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:16.402 [2024-10-25 15:21:59.089653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.402 [2024-10-25 15:21:59.089675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:16.402 [2024-10-25 15:21:59.089685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:16.402 [2024-10-25 15:21:59.089697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:16.402 [2024-10-25 15:21:59.089708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:16.402 [2024-10-25 15:21:59.089720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:16.402 [2024-10-25 15:21:59.089730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:16.402 [2024-10-25 15:21:59.089757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:16.402 [2024-10-25 15:21:59.089808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:16.402 [2024-10-25 15:21:59.089853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:16.402 [2024-10-25 15:21:59.089887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:16.402 [2024-10-25 15:21:59.089919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:16.402 [2024-10-25 15:21:59.089941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:16.402 [2024-10-25 15:21:59.089956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:16.402 [2024-10-25 15:21:59.089966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.402 [2024-10-25 15:21:59.089978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:16.402 [2024-10-25 15:21:59.090002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:16.402 [2024-10-25 15:21:59.090015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:16.402 [2024-10-25 15:21:59.090025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:16.402 [2024-10-25 15:21:59.090038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:16.402 [2024-10-25 15:21:59.090048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.090062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:16.402 [2024-10-25 15:21:59.090071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:16.402 [2024-10-25 15:21:59.090084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.090097] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:16.402 [2024-10-25 15:21:59.090110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:16.402 [2024-10-25 15:21:59.090121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:16.402 [2024-10-25 15:21:59.090134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:16.402 [2024-10-25 15:21:59.090145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:16.402 [2024-10-25 15:21:59.090160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:16.402 [2024-10-25 15:21:59.090171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:16.402 [2024-10-25 15:21:59.090193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:16.402 [2024-10-25 15:21:59.090204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:16.402 [2024-10-25 15:21:59.090217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:16.402 [2024-10-25 15:21:59.090232] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:16.403 [2024-10-25 15:21:59.090248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:16.403 [2024-10-25 15:21:59.090275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:16.403 [2024-10-25 15:21:59.090286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:16.403 [2024-10-25 15:21:59.090300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:16.403 [2024-10-25 15:21:59.090312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:16.403 [2024-10-25 15:21:59.090325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:16.403 [2024-10-25 15:21:59.090336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:16.403 [2024-10-25 15:21:59.090350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:16.403 [2024-10-25 15:21:59.090361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:16.403 [2024-10-25 15:21:59.090379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:16.403 [2024-10-25 15:21:59.090439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:16.403 [2024-10-25 15:21:59.090454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090466] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:16.403 [2024-10-25 15:21:59.090479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:16.403 [2024-10-25 15:21:59.090490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:16.403 [2024-10-25 15:21:59.090504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:16.403 [2024-10-25 15:21:59.090516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.403 [2024-10-25 15:21:59.090531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:16.403 [2024-10-25 15:21:59.090544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:17:16.403 [2024-10-25 15:21:59.090557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.403 [2024-10-25 15:21:59.090630] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
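All of the startup actions in this trace stem from the single bdev_ftl_create call issued above: it binds the thin-provisioned lvol as the base device and the nvc0n1p0 split as the NV cache, caps the L2P table at 60 MiB of DRAM, and on first-time initialization scrubs the cache's 5 chunks (about 4.5 s in this run, as the next step shows). A condensed sketch of the provisioning sequence as this test drives it — the addresses and sizes are the ones from this specific run, and <lvs-uuid> stands in for the store UUID printed earlier:

rpc=scripts/rpc.py
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                        # lvstore on the base namespace
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>)  # thin lvol, 103424 MiB virtual
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 # second NVMe for the cache
$rpc bdev_split_create nvc0n1 -s 5171 1                          # one 5171 MiB cache partition
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60
# fio then targets ftl0; bdev_ftl_unload -b ftl0 later persists metadata and detaches cleanly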
00:17:16.403 [2024-10-25 15:21:59.090653] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:21.669 [2024-10-25 15:22:03.556517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.556580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:21.669 [2024-10-25 15:22:03.556597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4473.133 ms 00:17:21.669 [2024-10-25 15:22:03.556615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.595867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.595927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.669 [2024-10-25 15:22:03.595944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.941 ms 00:17:21.669 [2024-10-25 15:22:03.595958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.596114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.596131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:21.669 [2024-10-25 15:22:03.596142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:21.669 [2024-10-25 15:22:03.596158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.653450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.653517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.669 [2024-10-25 15:22:03.653538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.320 ms 00:17:21.669 [2024-10-25 15:22:03.653563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.653625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.653644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.669 [2024-10-25 15:22:03.653659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.669 [2024-10-25 15:22:03.653676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.654389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.654451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.669 [2024-10-25 15:22:03.654482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:21.669 [2024-10-25 15:22:03.654502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.654708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.654748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.669 [2024-10-25 15:22:03.654771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:17:21.669 [2024-10-25 15:22:03.654804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.677630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.677695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.669 [2024-10-25 
15:22:03.677714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.805 ms 00:17:21.669 [2024-10-25 15:22:03.677730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.691449] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:21.669 [2024-10-25 15:22:03.708560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.708641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:21.669 [2024-10-25 15:22:03.708663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.725 ms 00:17:21.669 [2024-10-25 15:22:03.708674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.798927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.798992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:21.669 [2024-10-25 15:22:03.799013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.329 ms 00:17:21.669 [2024-10-25 15:22:03.799024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.799254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.799272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:21.669 [2024-10-25 15:22:03.799290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:17:21.669 [2024-10-25 15:22:03.799300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.834324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.834370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:21.669 [2024-10-25 15:22:03.834389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.010 ms 00:17:21.669 [2024-10-25 15:22:03.834402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.870564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.870604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:21.669 [2024-10-25 15:22:03.870622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.168 ms 00:17:21.669 [2024-10-25 15:22:03.870632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.871417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.871453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:21.669 [2024-10-25 15:22:03.871469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:21.669 [2024-10-25 15:22:03.871480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:03.973933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:03.973993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:21.669 [2024-10-25 15:22:03.974016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.546 ms 00:17:21.669 [2024-10-25 15:22:03.974027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 
15:22:04.011668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:04.011730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:21.669 [2024-10-25 15:22:04.011750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.588 ms 00:17:21.669 [2024-10-25 15:22:04.011761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:04.051813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:04.051890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:21.669 [2024-10-25 15:22:04.051911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.050 ms 00:17:21.669 [2024-10-25 15:22:04.051922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:04.090019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:04.090078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:21.669 [2024-10-25 15:22:04.090099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.084 ms 00:17:21.669 [2024-10-25 15:22:04.090110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:04.090185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:04.090199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:21.669 [2024-10-25 15:22:04.090217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:21.669 [2024-10-25 15:22:04.090231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:04.090375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.669 [2024-10-25 15:22:04.090388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:21.669 [2024-10-25 15:22:04.090402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:21.669 [2024-10-25 15:22:04.090412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.669 [2024-10-25 15:22:04.091782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5027.484 ms, result 0 00:17:21.669 { 00:17:21.669 "name": "ftl0", 00:17:21.669 "uuid": "07ea29f9-daa2-4536-a810-2ba9e7bdd6b5" 00:17:21.669 } 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:21.669 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:21.929 [ 00:17:21.929 { 00:17:21.929 "name": "ftl0", 00:17:21.929 "aliases": [ 00:17:21.929 "07ea29f9-daa2-4536-a810-2ba9e7bdd6b5" 00:17:21.929 ], 00:17:21.929 "product_name": "FTL 
disk", 00:17:21.929 "block_size": 4096, 00:17:21.929 "num_blocks": 20971520, 00:17:21.929 "uuid": "07ea29f9-daa2-4536-a810-2ba9e7bdd6b5", 00:17:21.929 "assigned_rate_limits": { 00:17:21.929 "rw_ios_per_sec": 0, 00:17:21.929 "rw_mbytes_per_sec": 0, 00:17:21.929 "r_mbytes_per_sec": 0, 00:17:21.929 "w_mbytes_per_sec": 0 00:17:21.929 }, 00:17:21.929 "claimed": false, 00:17:21.929 "zoned": false, 00:17:21.929 "supported_io_types": { 00:17:21.929 "read": true, 00:17:21.929 "write": true, 00:17:21.929 "unmap": true, 00:17:21.929 "flush": true, 00:17:21.929 "reset": false, 00:17:21.929 "nvme_admin": false, 00:17:21.929 "nvme_io": false, 00:17:21.929 "nvme_io_md": false, 00:17:21.929 "write_zeroes": true, 00:17:21.929 "zcopy": false, 00:17:21.929 "get_zone_info": false, 00:17:21.929 "zone_management": false, 00:17:21.929 "zone_append": false, 00:17:21.929 "compare": false, 00:17:21.929 "compare_and_write": false, 00:17:21.929 "abort": false, 00:17:21.929 "seek_hole": false, 00:17:21.929 "seek_data": false, 00:17:21.929 "copy": false, 00:17:21.929 "nvme_iov_md": false 00:17:21.929 }, 00:17:21.929 "driver_specific": { 00:17:21.929 "ftl": { 00:17:21.929 "base_bdev": "af757f87-a176-47ba-abc9-c157df4011bc", 00:17:21.929 "cache": "nvc0n1p0" 00:17:21.929 } 00:17:21.929 } 00:17:21.929 } 00:17:21.929 ] 00:17:21.929 15:22:04 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:17:21.929 15:22:04 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:21.929 15:22:04 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:22.189 15:22:04 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:22.189 15:22:04 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:22.189 [2024-10-25 15:22:04.914761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.189 [2024-10-25 15:22:04.914822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:22.189 [2024-10-25 15:22:04.914839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:22.189 [2024-10-25 15:22:04.914853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.189 [2024-10-25 15:22:04.914894] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:22.487 [2024-10-25 15:22:04.919203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.919258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:22.487 [2024-10-25 15:22:04.919275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.286 ms 00:17:22.487 [2024-10-25 15:22:04.919286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.919741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.919759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:22.487 [2024-10-25 15:22:04.919773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:17:22.487 [2024-10-25 15:22:04.919783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.922339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.922374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:22.487 
[2024-10-25 15:22:04.922396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.527 ms 00:17:22.487 [2024-10-25 15:22:04.922410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.927629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.927666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.487 [2024-10-25 15:22:04.927682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:17:22.487 [2024-10-25 15:22:04.927692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.966292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.966347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.487 [2024-10-25 15:22:04.966366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.579 ms 00:17:22.487 [2024-10-25 15:22:04.966377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.988627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.988699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.487 [2024-10-25 15:22:04.988720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.195 ms 00:17:22.487 [2024-10-25 15:22:04.988730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:04.988942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:04.988957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.487 [2024-10-25 15:22:04.988971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:22.487 [2024-10-25 15:22:04.988981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:05.025320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:05.025371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.487 [2024-10-25 15:22:05.025389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.362 ms 00:17:22.487 [2024-10-25 15:22:05.025399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:05.062397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:05.062441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.487 [2024-10-25 15:22:05.062459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.996 ms 00:17:22.487 [2024-10-25 15:22:05.062470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:05.099029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:05.099088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.487 [2024-10-25 15:22:05.099107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.561 ms 00:17:22.487 [2024-10-25 15:22:05.099117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:05.135282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.487 [2024-10-25 15:22:05.135326] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.487 [2024-10-25 15:22:05.135343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.078 ms 00:17:22.487 [2024-10-25 15:22:05.135353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.487 [2024-10-25 15:22:05.135413] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.487 [2024-10-25 15:22:05.135430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.487 [2024-10-25 15:22:05.135446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 
[2024-10-25 15:22:05.135707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.135985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:22.488 [2024-10-25 15:22:05.136025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.488 [2024-10-25 15:22:05.136624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.489 [2024-10-25 15:22:05.136634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.489 [2024-10-25 15:22:05.136647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.489 [2024-10-25 15:22:05.136660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.489 [2024-10-25 15:22:05.136674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.489 [2024-10-25 15:22:05.136685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:22.489 [2024-10-25 15:22:05.136700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:22.489 [2024-10-25 15:22:05.136711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:22.489 [2024-10-25 15:22:05.136724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:22.489 [2024-10-25 15:22:05.136742] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:22.489 [2024-10-25 15:22:05.136755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 07ea29f9-daa2-4536-a810-2ba9e7bdd6b5
00:17:22.489 [2024-10-25 15:22:05.136767] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:22.489 [2024-10-25 15:22:05.136782] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:22.489 [2024-10-25 15:22:05.136792] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:22.489 [2024-10-25 15:22:05.136806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:22.489 [2024-10-25 15:22:05.136817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:22.489 [2024-10-25 15:22:05.136833] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:22.489 [2024-10-25 15:22:05.136843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:22.489 [2024-10-25 15:22:05.136855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:22.489 [2024-10-25 15:22:05.136864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:22.489 [2024-10-25 15:22:05.136877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:22.489 [2024-10-25 15:22:05.136887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:22.489 [2024-10-25 15:22:05.136902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms
00:17:22.489 [2024-10-25 15:22:05.136920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:22.489 [2024-10-25 15:22:05.157699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:22.489 [2024-10-25 15:22:05.157740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:22.489 [2024-10-25 15:22:05.157758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.734 ms
00:17:22.489 [2024-10-25 15:22:05.157772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:22.489 [2024-10-25 15:22:05.158391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:22.489 [2024-10-25 15:22:05.158406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:22.489 [2024-10-25 15:22:05.158420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms
00:17:22.489 [2024-10-25 15:22:05.158430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:22.754 [2024-10-25 15:22:05.226797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:22.754 [2024-10-25 15:22:05.226860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:22.754 [2024-10-25 15:22:05.226889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:22.754 [2024-10-25 15:22:05.226901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
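Each management step in the shutdown above is reported by mngt/ftl_mngt.c as an Action or Rollback record of trace_step lines (name, duration, status), and the dump_stats block explains the WAF of inf: 960 total (internal) writes against 0 user writes leaves the write-amplification ratio undefined. A minimal sketch that totals the per-step durations to cross-check the overall "FTL shutdown" duration reported further down; ftl_shutdown.log is a hypothetical capture of this output, not a file the test produces:

# Sketch: sum the "duration: X ms" fields emitted by trace_step records,
# assuming the log format shown above was saved to ftl_shutdown.log.
grep 'trace_step.*duration:' ftl_shutdown.log |
  awk '{ for (i = 1; i <= NF; i++) if ($i == "duration:") sum += $(i + 1) }
       END { printf "steps total: %.3f ms\n", sum }'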
00:17:22.754 [2024-10-25 15:22:05.226988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.227000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.754 [2024-10-25 15:22:05.227013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.227023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.227170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.227211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.754 [2024-10-25 15:22:05.227226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.227239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.227276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.227287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.754 [2024-10-25 15:22:05.227318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.227329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.357709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.357772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.754 [2024-10-25 15:22:05.357790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.357805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.461341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.461627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.754 [2024-10-25 15:22:05.461669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.461685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.461853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.461875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.754 [2024-10-25 15:22:05.461890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.461900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.461997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.462010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.754 [2024-10-25 15:22:05.462024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.462035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.462200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.462215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.754 [2024-10-25 15:22:05.462229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 
15:22:05.462239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.462296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.462312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.754 [2024-10-25 15:22:05.462336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.754 [2024-10-25 15:22:05.462352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.754 [2024-10-25 15:22:05.462422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.754 [2024-10-25 15:22:05.462444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.755 [2024-10-25 15:22:05.462466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.755 [2024-10-25 15:22:05.462483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.755 [2024-10-25 15:22:05.462572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.755 [2024-10-25 15:22:05.462594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.755 [2024-10-25 15:22:05.462616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.755 [2024-10-25 15:22:05.462634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.755 [2024-10-25 15:22:05.462874] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.945 ms, result 0 00:17:22.755 true 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74088 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74088 ']' 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74088 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74088 00:17:23.013 killing process with pid 74088 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74088' 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74088 00:17:23.013 15:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74088 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:29.584 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:29.585 15:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:29.585 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:29.585 fio-3.35 00:17:29.585 Starting 1 thread 00:17:33.780 00:17:33.780 test: (groupid=0, jobs=1): err= 0: pid=74312: Fri Oct 25 15:22:16 2024 00:17:33.780 read: IOPS=1025, BW=68.1MiB/s (71.4MB/s)(255MiB/3738msec) 00:17:33.780 slat (nsec): min=4340, max=40281, avg=6572.87, stdev=3016.30 00:17:33.780 clat (usec): min=261, max=1802, avg=435.56, stdev=71.32 00:17:33.780 lat (usec): min=267, max=1807, avg=442.13, stdev=71.67 00:17:33.780 clat percentiles (usec): 00:17:33.780 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 343], 20.00th=[ 392], 00:17:33.780 | 30.00th=[ 400], 40.00th=[ 412], 50.00th=[ 429], 60.00th=[ 457], 00:17:33.780 | 70.00th=[ 465], 80.00th=[ 482], 90.00th=[ 523], 95.00th=[ 537], 00:17:33.780 | 99.00th=[ 603], 99.50th=[ 644], 99.90th=[ 914], 99.95th=[ 1401], 00:17:33.780 | 99.99th=[ 1795] 00:17:33.780 write: IOPS=1032, BW=68.6MiB/s (71.9MB/s)(256MiB/3733msec); 0 zone resets 00:17:33.780 slat (usec): min=15, max=135, avg=20.40, stdev= 6.20 00:17:33.780 clat (usec): min=323, max=2298, avg=498.40, stdev=82.08 00:17:33.780 lat (usec): min=353, max=2317, avg=518.81, stdev=82.41 00:17:33.780 clat percentiles (usec): 00:17:33.780 | 1.00th=[ 351], 5.00th=[ 408], 10.00th=[ 416], 20.00th=[ 429], 00:17:33.780 | 30.00th=[ 449], 40.00th=[ 478], 50.00th=[ 490], 60.00th=[ 506], 00:17:33.780 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 619], 00:17:33.780 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 930], 00:17:33.780 | 99.99th=[ 2311] 00:17:33.780 bw ( KiB/s): min=67592, max=72760, per=100.00%, avg=70331.43, stdev=2106.07, samples=7 00:17:33.780 iops : min= 994, max= 1070, avg=1034.29, stdev=30.97, samples=7 00:17:33.780 lat (usec) : 500=71.04%, 750=28.25%, 1000=0.66% 00:17:33.780 lat 
(msec) : 2=0.04%, 4=0.01% 00:17:33.780 cpu : usr=99.01%, sys=0.32%, ctx=7, majf=0, minf=1169 00:17:33.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:33.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.781 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:33.781 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:33.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:33.781 00:17:33.781 Run status group 0 (all jobs): 00:17:33.781 READ: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=255MiB (267MB), run=3738-3738msec 00:17:33.781 WRITE: bw=68.6MiB/s (71.9MB/s), 68.6MiB/s-68.6MiB/s (71.9MB/s-71.9MB/s), io=256MiB (269MB), run=3733-3733msec 00:17:35.724 ----------------------------------------------------- 00:17:35.724 Suppressions used: 00:17:35.724 count bytes template 00:17:35.724 1 5 /usr/src/fio/parse.c 00:17:35.724 1 8 libtcmalloc_minimal.so 00:17:35.724 1 904 libcrypto.so 00:17:35.724 ----------------------------------------------------- 00:17:35.724 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:35.724 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:35.984 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:35.984 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:35.984 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:35.984 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:35.984 15:22:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:35.984 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.984 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:35.984 fio-3.35 00:17:35.984 Starting 2 threads 00:18:08.072 00:18:08.072 first_half: (groupid=0, jobs=1): err= 0: pid=74415: Fri Oct 25 15:22:46 2024 00:18:08.072 read: IOPS=2478, BW=9915KiB/s (10.2MB/s)(255MiB/26322msec) 00:18:08.072 slat (nsec): min=3364, max=50430, avg=5949.69, stdev=1936.91 00:18:08.072 clat (usec): min=775, max=830391, avg=37659.54, stdev=38117.27 00:18:08.072 lat (usec): min=783, max=830396, avg=37665.49, stdev=38117.36 00:18:08.072 clat percentiles (msec): 00:18:08.072 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 32], 20.00th=[ 33], 00:18:08.072 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:18:08.072 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 47], 00:18:08.072 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 735], 99.95th=[ 743], 00:18:08.072 | 99.99th=[ 760] 00:18:08.072 write: IOPS=2972, BW=11.6MiB/s (12.2MB/s)(256MiB/22045msec); 0 zone resets 00:18:08.072 slat (usec): min=4, max=1182, avg= 8.38, stdev= 9.92 00:18:08.072 clat (usec): min=396, max=176393, avg=13891.01, stdev=23381.95 00:18:08.072 lat (usec): min=432, max=176403, avg=13899.38, stdev=23382.21 00:18:08.072 clat percentiles (usec): 00:18:08.072 | 1.00th=[ 996], 5.00th=[ 1254], 10.00th=[ 1500], 20.00th=[ 1975], 00:18:08.072 | 30.00th=[ 3326], 40.00th=[ 5080], 50.00th=[ 6325], 60.00th=[ 7242], 00:18:08.072 | 70.00th=[ 9241], 80.00th=[ 12911], 90.00th=[ 38011], 95.00th=[ 80217], 00:18:08.072 | 99.00th=[104334], 99.50th=[112722], 99.90th=[135267], 99.95th=[143655], 00:18:08.072 | 99.99th=[168821] 00:18:08.072 bw ( KiB/s): min= 224, max=40136, per=76.01%, avg=18077.93, stdev=11056.57, samples=29 00:18:08.072 iops : min= 56, max=10034, avg=4519.48, stdev=2764.14, samples=29 00:18:08.072 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.45% 00:18:08.072 lat (msec) : 2=9.77%, 4=7.14%, 10=20.11%, 20=8.01%, 50=47.92% 00:18:08.072 lat (msec) : 100=4.44%, 250=1.94%, 500=0.06%, 750=0.09%, 1000=0.01% 00:18:08.072 cpu : usr=99.23%, sys=0.21%, ctx=44, majf=0, minf=5569 00:18:08.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:08.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.072 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.072 issued rwts: total=65244,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.072 second_half: (groupid=0, jobs=1): err= 0: pid=74416: Fri Oct 25 15:22:46 2024 00:18:08.072 read: IOPS=2490, BW=9963KiB/s (10.2MB/s)(255MiB/26170msec) 00:18:08.072 slat (nsec): min=3432, max=34405, avg=6015.47, stdev=1973.04 00:18:08.072 clat (usec): min=773, max=765471, avg=38249.13, stdev=37337.67 00:18:08.072 lat (usec): min=782, max=765477, avg=38255.15, stdev=37337.73 00:18:08.072 clat percentiles (msec): 00:18:08.072 | 1.00th=[ 7], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:18:08.072 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:18:08.072 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 39], 
95.00th=[ 50], 00:18:08.072 | 99.00th=[ 165], 99.50th=[ 188], 99.90th=[ 751], 99.95th=[ 760], 00:18:08.072 | 99.99th=[ 768] 00:18:08.072 write: IOPS=3273, BW=12.8MiB/s (13.4MB/s)(256MiB/20019msec); 0 zone resets 00:18:08.072 slat (usec): min=4, max=402, avg= 8.44, stdev= 4.85 00:18:08.072 clat (usec): min=420, max=176515, avg=13046.08, stdev=23095.24 00:18:08.072 lat (usec): min=426, max=176525, avg=13054.52, stdev=23095.44 00:18:08.072 clat percentiles (usec): 00:18:08.072 | 1.00th=[ 1037], 5.00th=[ 1319], 10.00th=[ 1516], 20.00th=[ 1860], 00:18:08.072 | 30.00th=[ 2507], 40.00th=[ 4359], 50.00th=[ 5604], 60.00th=[ 6783], 00:18:08.072 | 70.00th=[ 9372], 80.00th=[ 12518], 90.00th=[ 25560], 95.00th=[ 80217], 00:18:08.072 | 99.00th=[104334], 99.50th=[111674], 99.90th=[132645], 99.95th=[152044], 00:18:08.072 | 99.99th=[166724] 00:18:08.072 bw ( KiB/s): min= 920, max=39008, per=91.85%, avg=21845.83, stdev=10818.71, samples=24 00:18:08.072 iops : min= 230, max= 9752, avg=5461.42, stdev=2704.64, samples=24 00:18:08.072 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.32% 00:18:08.072 lat (msec) : 2=11.46%, 4=7.65%, 10=16.91%, 20=9.07%, 50=48.11% 00:18:08.072 lat (msec) : 100=4.41%, 250=1.88%, 500=0.02%, 750=0.04%, 1000=0.06% 00:18:08.072 cpu : usr=99.30%, sys=0.16%, ctx=33, majf=0, minf=5538 00:18:08.072 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:08.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.072 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:08.072 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.072 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:08.072 00:18:08.072 Run status group 0 (all jobs): 00:18:08.072 READ: bw=19.4MiB/s (20.3MB/s), 9915KiB/s-9963KiB/s (10.2MB/s-10.2MB/s), io=509MiB (534MB), run=26170-26322msec 00:18:08.072 WRITE: bw=23.2MiB/s (24.4MB/s), 11.6MiB/s-12.8MiB/s (12.2MB/s-13.4MB/s), io=512MiB (537MB), run=20019-22045msec 00:18:08.073 ----------------------------------------------------- 00:18:08.073 Suppressions used: 00:18:08.073 count bytes template 00:18:08.073 2 10 /usr/src/fio/parse.c 00:18:08.073 2 192 /usr/src/fio/iolog.c 00:18:08.073 1 8 libtcmalloc_minimal.so 00:18:08.073 1 904 libcrypto.so 00:18:08.073 ----------------------------------------------------- 00:18:08.073 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:08.073 15:22:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:08.073 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:08.073 fio-3.35 00:18:08.073 Starting 1 thread 00:18:22.951 00:18:22.951 test: (groupid=0, jobs=1): err= 0: pid=74757: Fri Oct 25 15:23:04 2024 00:18:22.951 read: IOPS=7740, BW=30.2MiB/s (31.7MB/s)(255MiB/8424msec) 00:18:22.951 slat (nsec): min=3378, max=36942, avg=5308.12, stdev=1875.04 00:18:22.952 clat (usec): min=606, max=31938, avg=16527.85, stdev=1591.69 00:18:22.952 lat (usec): min=610, max=31942, avg=16533.16, stdev=1591.71 00:18:22.952 clat percentiles (usec): 00:18:22.952 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15533], 20.00th=[15795], 00:18:22.952 | 30.00th=[15926], 40.00th=[16057], 50.00th=[16319], 60.00th=[16450], 00:18:22.952 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17433], 95.00th=[18744], 00:18:22.952 | 99.00th=[25035], 99.50th=[26870], 99.90th=[30278], 99.95th=[31065], 00:18:22.952 | 99.99th=[31589] 00:18:22.952 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(256MiB/5325msec); 0 zone resets 00:18:22.952 slat (usec): min=4, max=1305, avg= 8.16, stdev=11.27 00:18:22.952 clat (usec): min=642, max=59348, avg=10351.84, stdev=12497.23 00:18:22.952 lat (usec): min=650, max=59356, avg=10360.00, stdev=12497.26 00:18:22.952 clat percentiles (usec): 00:18:22.952 | 1.00th=[ 963], 5.00th=[ 1156], 10.00th=[ 1319], 20.00th=[ 1532], 00:18:22.952 | 30.00th=[ 1713], 40.00th=[ 2311], 50.00th=[ 6521], 60.00th=[ 8029], 00:18:22.952 | 70.00th=[ 9634], 80.00th=[12649], 90.00th=[35390], 95.00th=[36963], 00:18:22.952 | 99.00th=[50070], 99.50th=[53216], 99.90th=[56361], 99.95th=[57410], 00:18:22.952 | 99.99th=[58459] 00:18:22.952 bw ( KiB/s): min=27112, max=69128, per=96.82%, avg=47662.55, stdev=10710.90, samples=11 00:18:22.952 iops : min= 6778, max=17282, avg=11915.64, stdev=2677.73, samples=11 00:18:22.952 lat (usec) : 750=0.03%, 1000=0.72% 00:18:22.952 lat (msec) : 2=18.31%, 4=2.01%, 10=14.92%, 20=54.38%, 50=9.12% 00:18:22.952 lat (msec) : 100=0.51% 
00:18:22.952 cpu : usr=98.55%, sys=0.58%, ctx=23, majf=0, minf=5566 00:18:22.952 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:22.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.952 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.952 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.952 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.952 00:18:22.952 Run status group 0 (all jobs): 00:18:22.952 READ: bw=30.2MiB/s (31.7MB/s), 30.2MiB/s-30.2MiB/s (31.7MB/s-31.7MB/s), io=255MiB (267MB), run=8424-8424msec 00:18:22.952 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=256MiB (268MB), run=5325-5325msec 00:18:23.888 ----------------------------------------------------- 00:18:23.888 Suppressions used: 00:18:23.888 count bytes template 00:18:23.888 1 5 /usr/src/fio/parse.c 00:18:23.888 2 192 /usr/src/fio/iolog.c 00:18:23.888 1 8 libtcmalloc_minimal.so 00:18:23.888 1 904 libcrypto.so 00:18:23.888 ----------------------------------------------------- 00:18:23.888 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.888 Remove shared memory files 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57727 /dev/shm/spdk_tgt_trace.pid72979 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:23.888 ************************************ 00:18:23.888 END TEST ftl_fio_basic 00:18:23.888 ************************************ 00:18:23.888 00:18:23.888 real 1m11.686s 00:18:23.888 user 2m36.786s 00:18:23.888 sys 0m4.050s 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:23.888 15:23:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 15:23:06 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:23.888 15:23:06 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:23.888 15:23:06 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:23.888 15:23:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:23.888 ************************************ 00:18:23.888 START TEST ftl_bdevperf 00:18:23.888 ************************************ 00:18:23.888 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:23.888 * Looking for test storage... 
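The xtrace below steps through scripts/common.sh's version helpers (lt/cmp_versions) to decide whether the installed lcov (1.15) predates 2 and therefore which coverage options to export. A standalone sketch of the same dotted-version test, assuming GNU sort's -V option rather than the repo's digit-by-digit comparison loop:

# Sketch: "is version $1 older than $2?"; equivalent in spirit to the
# cmp_versions trace below, not the repo's exact helper.
version_lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"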
00:18:23.888 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:23.888 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:23.888 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:18:23.888 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:24.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.147 --rc genhtml_branch_coverage=1 00:18:24.147 --rc genhtml_function_coverage=1 00:18:24.147 --rc genhtml_legend=1 00:18:24.147 --rc geninfo_all_blocks=1 00:18:24.147 --rc geninfo_unexecuted_blocks=1 00:18:24.147 00:18:24.147 ' 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:24.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.147 --rc genhtml_branch_coverage=1 00:18:24.147 
--rc genhtml_function_coverage=1 00:18:24.147 --rc genhtml_legend=1 00:18:24.147 --rc geninfo_all_blocks=1 00:18:24.147 --rc geninfo_unexecuted_blocks=1 00:18:24.147 00:18:24.147 ' 00:18:24.147 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:24.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.147 --rc genhtml_branch_coverage=1 00:18:24.148 --rc genhtml_function_coverage=1 00:18:24.148 --rc genhtml_legend=1 00:18:24.148 --rc geninfo_all_blocks=1 00:18:24.148 --rc geninfo_unexecuted_blocks=1 00:18:24.148 00:18:24.148 ' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:24.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.148 --rc genhtml_branch_coverage=1 00:18:24.148 --rc genhtml_function_coverage=1 00:18:24.148 --rc genhtml_legend=1 00:18:24.148 --rc geninfo_all_blocks=1 00:18:24.148 --rc geninfo_unexecuted_blocks=1 00:18:24.148 00:18:24.148 ' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75001 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75001 00:18:24.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75001 ']' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.148 15:23:06 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:24.148 [2024-10-25 15:23:06.828597] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
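bdevperf was launched with -z, so it starts idle and waits for RPC configuration, and waitforlisten above polls the default /var/tmp/spdk.sock until the target answers. A minimal sketch of such a readiness loop; rpc_get_methods is a standard SPDK RPC, while the helper name, retry count, and sleep interval here are illustrative assumptions:

# Sketch: poll an SPDK app's RPC socket until it responds, in the spirit
# of waitforlisten. Retry/sleep values are arbitrary.
wait_for_rpc() {
  local sock=${1:-/var/tmp/spdk.sock} i
  for ((i = 0; i < 120; i++)); do
    scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
    sleep 0.5
  done
  return 1
}
wait_for_rpc /var/tmp/spdk.sock || echo "bdevperf never came up" >&2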
00:18:24.148 [2024-10-25 15:23:06.828726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75001 ] 00:18:24.406 [2024-10-25 15:23:07.007892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.406 [2024-10-25 15:23:07.122140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:24.989 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:25.556 15:23:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:25.556 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:25.556 { 00:18:25.556 "name": "nvme0n1", 00:18:25.556 "aliases": [ 00:18:25.556 "31adb17c-9281-4c53-8642-671685f3b7b4" 00:18:25.556 ], 00:18:25.556 "product_name": "NVMe disk", 00:18:25.556 "block_size": 4096, 00:18:25.556 "num_blocks": 1310720, 00:18:25.556 "uuid": "31adb17c-9281-4c53-8642-671685f3b7b4", 00:18:25.556 "numa_id": -1, 00:18:25.556 "assigned_rate_limits": { 00:18:25.556 "rw_ios_per_sec": 0, 00:18:25.556 "rw_mbytes_per_sec": 0, 00:18:25.556 "r_mbytes_per_sec": 0, 00:18:25.556 "w_mbytes_per_sec": 0 00:18:25.556 }, 00:18:25.556 "claimed": true, 00:18:25.556 "claim_type": "read_many_write_one", 00:18:25.556 "zoned": false, 00:18:25.556 "supported_io_types": { 00:18:25.556 "read": true, 00:18:25.556 "write": true, 00:18:25.556 "unmap": true, 00:18:25.556 "flush": true, 00:18:25.556 "reset": true, 00:18:25.556 "nvme_admin": true, 00:18:25.556 "nvme_io": true, 00:18:25.556 "nvme_io_md": false, 00:18:25.556 "write_zeroes": true, 00:18:25.556 "zcopy": false, 00:18:25.556 "get_zone_info": false, 00:18:25.556 "zone_management": false, 00:18:25.556 "zone_append": false, 00:18:25.556 "compare": true, 00:18:25.556 "compare_and_write": false, 00:18:25.556 "abort": true, 00:18:25.556 "seek_hole": false, 00:18:25.556 "seek_data": false, 00:18:25.556 "copy": true, 00:18:25.556 "nvme_iov_md": false 00:18:25.556 }, 00:18:25.556 "driver_specific": { 00:18:25.556 
"nvme": [ 00:18:25.556 { 00:18:25.556 "pci_address": "0000:00:11.0", 00:18:25.556 "trid": { 00:18:25.556 "trtype": "PCIe", 00:18:25.556 "traddr": "0000:00:11.0" 00:18:25.556 }, 00:18:25.556 "ctrlr_data": { 00:18:25.556 "cntlid": 0, 00:18:25.556 "vendor_id": "0x1b36", 00:18:25.556 "model_number": "QEMU NVMe Ctrl", 00:18:25.556 "serial_number": "12341", 00:18:25.556 "firmware_revision": "8.0.0", 00:18:25.556 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:25.557 "oacs": { 00:18:25.557 "security": 0, 00:18:25.557 "format": 1, 00:18:25.557 "firmware": 0, 00:18:25.557 "ns_manage": 1 00:18:25.557 }, 00:18:25.557 "multi_ctrlr": false, 00:18:25.557 "ana_reporting": false 00:18:25.557 }, 00:18:25.557 "vs": { 00:18:25.557 "nvme_version": "1.4" 00:18:25.557 }, 00:18:25.557 "ns_data": { 00:18:25.557 "id": 1, 00:18:25.557 "can_share": false 00:18:25.557 } 00:18:25.557 } 00:18:25.557 ], 00:18:25.557 "mp_policy": "active_passive" 00:18:25.557 } 00:18:25.557 } 00:18:25.557 ]' 00:18:25.557 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.557 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.557 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=60050628-e438-48f8-ae17-e7e7cafbeb3a 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:25.818 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60050628-e438-48f8-ae17-e7e7cafbeb3a 00:18:26.076 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:26.335 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=7782f962-dfab-4189-9bca-a865b3179a76 00:18:26.336 15:23:08 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7782f962-dfab-4189-9bca-a865b3179a76 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.595 15:23:09 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:26.595 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:26.854 { 00:18:26.854 "name": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:26.854 "aliases": [ 00:18:26.854 "lvs/nvme0n1p0" 00:18:26.854 ], 00:18:26.854 "product_name": "Logical Volume", 00:18:26.854 "block_size": 4096, 00:18:26.854 "num_blocks": 26476544, 00:18:26.854 "uuid": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:26.854 "assigned_rate_limits": { 00:18:26.854 "rw_ios_per_sec": 0, 00:18:26.854 "rw_mbytes_per_sec": 0, 00:18:26.854 "r_mbytes_per_sec": 0, 00:18:26.854 "w_mbytes_per_sec": 0 00:18:26.854 }, 00:18:26.854 "claimed": false, 00:18:26.854 "zoned": false, 00:18:26.854 "supported_io_types": { 00:18:26.854 "read": true, 00:18:26.854 "write": true, 00:18:26.854 "unmap": true, 00:18:26.854 "flush": false, 00:18:26.854 "reset": true, 00:18:26.854 "nvme_admin": false, 00:18:26.854 "nvme_io": false, 00:18:26.854 "nvme_io_md": false, 00:18:26.854 "write_zeroes": true, 00:18:26.854 "zcopy": false, 00:18:26.854 "get_zone_info": false, 00:18:26.854 "zone_management": false, 00:18:26.854 "zone_append": false, 00:18:26.854 "compare": false, 00:18:26.854 "compare_and_write": false, 00:18:26.854 "abort": false, 00:18:26.854 "seek_hole": true, 00:18:26.854 "seek_data": true, 00:18:26.854 "copy": false, 00:18:26.854 "nvme_iov_md": false 00:18:26.854 }, 00:18:26.854 "driver_specific": { 00:18:26.854 "lvol": { 00:18:26.854 "lvol_store_uuid": "7782f962-dfab-4189-9bca-a865b3179a76", 00:18:26.854 "base_bdev": "nvme0n1", 00:18:26.854 "thin_provision": true, 00:18:26.854 "num_allocated_clusters": 0, 00:18:26.854 "snapshot": false, 00:18:26.854 "clone": false, 00:18:26.854 "esnap_clone": false 00:18:26.854 } 00:18:26.854 } 00:18:26.854 } 00:18:26.854 ]' 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:26.854 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:27.111 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.368 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:27.368 { 00:18:27.368 "name": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:27.368 "aliases": [ 00:18:27.368 "lvs/nvme0n1p0" 00:18:27.368 ], 00:18:27.368 "product_name": "Logical Volume", 00:18:27.368 "block_size": 4096, 00:18:27.368 "num_blocks": 26476544, 00:18:27.368 "uuid": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:27.368 "assigned_rate_limits": { 00:18:27.368 "rw_ios_per_sec": 0, 00:18:27.368 "rw_mbytes_per_sec": 0, 00:18:27.368 "r_mbytes_per_sec": 0, 00:18:27.369 "w_mbytes_per_sec": 0 00:18:27.369 }, 00:18:27.369 "claimed": false, 00:18:27.369 "zoned": false, 00:18:27.369 "supported_io_types": { 00:18:27.369 "read": true, 00:18:27.369 "write": true, 00:18:27.369 "unmap": true, 00:18:27.369 "flush": false, 00:18:27.369 "reset": true, 00:18:27.369 "nvme_admin": false, 00:18:27.369 "nvme_io": false, 00:18:27.369 "nvme_io_md": false, 00:18:27.369 "write_zeroes": true, 00:18:27.369 "zcopy": false, 00:18:27.369 "get_zone_info": false, 00:18:27.369 "zone_management": false, 00:18:27.369 "zone_append": false, 00:18:27.369 "compare": false, 00:18:27.369 "compare_and_write": false, 00:18:27.369 "abort": false, 00:18:27.369 "seek_hole": true, 00:18:27.369 "seek_data": true, 00:18:27.369 "copy": false, 00:18:27.369 "nvme_iov_md": false 00:18:27.369 }, 00:18:27.369 "driver_specific": { 00:18:27.369 "lvol": { 00:18:27.369 "lvol_store_uuid": "7782f962-dfab-4189-9bca-a865b3179a76", 00:18:27.369 "base_bdev": "nvme0n1", 00:18:27.369 "thin_provision": true, 00:18:27.369 "num_allocated_clusters": 0, 00:18:27.369 "snapshot": false, 00:18:27.369 "clone": false, 00:18:27.369 "esnap_clone": false 00:18:27.369 } 00:18:27.369 } 00:18:27.369 } 00:18:27.369 ]' 00:18:27.369 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:27.369 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:27.369 15:23:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:27.369 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:27.369 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:27.369 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:27.369 15:23:10 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:27.369 15:23:10 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:27.645 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d 00:18:27.904 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:27.904 { 00:18:27.904 "name": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:27.904 "aliases": [ 00:18:27.904 "lvs/nvme0n1p0" 00:18:27.904 ], 00:18:27.904 "product_name": "Logical Volume", 00:18:27.904 "block_size": 4096, 00:18:27.904 "num_blocks": 26476544, 00:18:27.904 "uuid": "d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d", 00:18:27.904 "assigned_rate_limits": { 00:18:27.904 "rw_ios_per_sec": 0, 00:18:27.904 "rw_mbytes_per_sec": 0, 00:18:27.904 "r_mbytes_per_sec": 0, 00:18:27.904 "w_mbytes_per_sec": 0 00:18:27.904 }, 00:18:27.904 "claimed": false, 00:18:27.904 "zoned": false, 00:18:27.904 "supported_io_types": { 00:18:27.904 "read": true, 00:18:27.904 "write": true, 00:18:27.904 "unmap": true, 00:18:27.904 "flush": false, 00:18:27.904 "reset": true, 00:18:27.904 "nvme_admin": false, 00:18:27.904 "nvme_io": false, 00:18:27.904 "nvme_io_md": false, 00:18:27.904 "write_zeroes": true, 00:18:27.904 "zcopy": false, 00:18:27.904 "get_zone_info": false, 00:18:27.904 "zone_management": false, 00:18:27.904 "zone_append": false, 00:18:27.904 "compare": false, 00:18:27.904 "compare_and_write": false, 00:18:27.904 "abort": false, 00:18:27.904 "seek_hole": true, 00:18:27.904 "seek_data": true, 00:18:27.904 "copy": false, 00:18:27.904 "nvme_iov_md": false 00:18:27.904 }, 00:18:27.904 "driver_specific": { 00:18:27.904 "lvol": { 00:18:27.904 "lvol_store_uuid": "7782f962-dfab-4189-9bca-a865b3179a76", 00:18:27.904 "base_bdev": "nvme0n1", 00:18:27.904 "thin_provision": true, 00:18:27.904 "num_allocated_clusters": 0, 00:18:27.904 "snapshot": false, 00:18:27.904 "clone": false, 00:18:27.904 "esnap_clone": false 00:18:27.904 } 00:18:27.904 } 00:18:27.904 } 00:18:27.904 ]' 00:18:27.904 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:27.904 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:27.905 15:23:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d55e1c4c-1b38-4db4-b47f-3ac9cf6d501d -c nvc0n1p0 --l2p_dram_limit 20 00:18:28.164 [2024-10-25 15:23:10.739353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.739418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.164 [2024-10-25 15:23:10.739435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:18:28.164 [2024-10-25 15:23:10.739449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.739514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.739529] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.164 [2024-10-25 15:23:10.739541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:18:28.164 [2024-10-25 15:23:10.739557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.739576] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.164 [2024-10-25 15:23:10.740588] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.164 [2024-10-25 15:23:10.740617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.740635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.164 [2024-10-25 15:23:10.740646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:18:28.164 [2024-10-25 15:23:10.740659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.740699] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID befa34e6-067d-4a85-845a-220eb94a2424 00:18:28.164 [2024-10-25 15:23:10.742199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.742233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:28.164 [2024-10-25 15:23:10.742248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:28.164 [2024-10-25 15:23:10.742262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.749623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.749654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.164 [2024-10-25 15:23:10.749670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.326 ms 00:18:28.164 [2024-10-25 15:23:10.749680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.749782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.749797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.164 [2024-10-25 15:23:10.749819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:28.164 [2024-10-25 15:23:10.749829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.749881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.749893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.164 [2024-10-25 15:23:10.749906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:28.164 [2024-10-25 15:23:10.749916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.749942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.164 [2024-10-25 15:23:10.755061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.755099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.164 [2024-10-25 15:23:10.755112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:18:28.164 [2024-10-25 15:23:10.755126] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.755159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.164 [2024-10-25 15:23:10.755189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.164 [2024-10-25 15:23:10.755200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:28.164 [2024-10-25 15:23:10.755213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.164 [2024-10-25 15:23:10.755267] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:28.164 [2024-10-25 15:23:10.755414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.164 [2024-10-25 15:23:10.755432] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.164 [2024-10-25 15:23:10.755449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:28.164 [2024-10-25 15:23:10.755462] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.164 [2024-10-25 15:23:10.755477] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.164 [2024-10-25 15:23:10.755488] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.164 [2024-10-25 15:23:10.755501] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.164 [2024-10-25 15:23:10.755511] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.165 [2024-10-25 15:23:10.755523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.165 [2024-10-25 15:23:10.755534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.165 [2024-10-25 15:23:10.755546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.165 [2024-10-25 15:23:10.755557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:28.165 [2024-10-25 15:23:10.755573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.165 [2024-10-25 15:23:10.755663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.165 [2024-10-25 15:23:10.755685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.165 [2024-10-25 15:23:10.755696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:28.165 [2024-10-25 15:23:10.755711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.165 [2024-10-25 15:23:10.755796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.165 [2024-10-25 15:23:10.755811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.165 [2024-10-25 15:23:10.755821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.165 [2024-10-25 15:23:10.755835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.755848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.165 [2024-10-25 15:23:10.755860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.755869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.165 
[2024-10-25 15:23:10.755881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.165 [2024-10-25 15:23:10.755891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.165 [2024-10-25 15:23:10.755902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.165 [2024-10-25 15:23:10.755911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.165 [2024-10-25 15:23:10.755923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.165 [2024-10-25 15:23:10.755933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.165 [2024-10-25 15:23:10.755955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.165 [2024-10-25 15:23:10.755964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.165 [2024-10-25 15:23:10.755978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.755988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.165 [2024-10-25 15:23:10.756000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.165 [2024-10-25 15:23:10.756033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.165 [2024-10-25 15:23:10.756066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.165 [2024-10-25 15:23:10.756099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.165 [2024-10-25 15:23:10.756132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.165 [2024-10-25 15:23:10.756165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.165 [2024-10-25 15:23:10.756197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.165 [2024-10-25 15:23:10.756209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.165 [2024-10-25 15:23:10.756218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.165 [2024-10-25 15:23:10.756230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.165 [2024-10-25 15:23:10.756239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:28.165 [2024-10-25 15:23:10.756251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.165 [2024-10-25 15:23:10.756272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:28.165 [2024-10-25 15:23:10.756281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.165 [2024-10-25 15:23:10.756303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.165 [2024-10-25 15:23:10.756315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.165 [2024-10-25 15:23:10.756342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.165 [2024-10-25 15:23:10.756352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.165 [2024-10-25 15:23:10.756364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.165 [2024-10-25 15:23:10.756373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.165 [2024-10-25 15:23:10.756384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.165 [2024-10-25 15:23:10.756394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.165 [2024-10-25 15:23:10.756411] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.165 [2024-10-25 15:23:10.756424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.165 [2024-10-25 15:23:10.756450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.165 [2024-10-25 15:23:10.756463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.165 [2024-10-25 15:23:10.756475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.165 [2024-10-25 15:23:10.756488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.165 [2024-10-25 15:23:10.756498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.165 [2024-10-25 15:23:10.756511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.165 [2024-10-25 15:23:10.756521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.165 [2024-10-25 15:23:10.756536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.165 [2024-10-25 15:23:10.756547] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.165 [2024-10-25 15:23:10.756605] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.165 [2024-10-25 15:23:10.756617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.165 [2024-10-25 15:23:10.756643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.165 [2024-10-25 15:23:10.756656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.165 [2024-10-25 15:23:10.756667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.165 [2024-10-25 15:23:10.756680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.165 [2024-10-25 15:23:10.756690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.165 [2024-10-25 15:23:10.756706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:18:28.165 [2024-10-25 15:23:10.756716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.165 [2024-10-25 15:23:10.756757] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
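The layout dump above fixes the L2P geometry, and its figures cross-check by hand. A minimal shell sketch using only numbers taken from the trace (the variable names are illustrative, not part of the test):
l2p_entries=20971520     # "L2P entries: 20971520" -- one entry per 4 KiB logical block
l2p_addr_size=4          # "L2P address size: 4" bytes per entry
echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # -> 80, matching "Region l2p ... blocks: 80.00 MiB"
# With --l2p_dram_limit 20 only ~20 MiB of that 80 MiB table may stay resident in DRAM;
# the trace below accordingly reports "l2p maximum resident size is: 19 (of 20) MiB".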
00:18:28.165 [2024-10-25 15:23:10.756770] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:32.412 [2024-10-25 15:23:14.307136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.307380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:32.412 [2024-10-25 15:23:14.307415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3556.137 ms 00:18:32.412 [2024-10-25 15:23:14.307431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.342684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.342887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.412 [2024-10-25 15:23:14.342929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.933 ms 00:18:32.412 [2024-10-25 15:23:14.342941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.343087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.343101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:32.412 [2024-10-25 15:23:14.343118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:32.412 [2024-10-25 15:23:14.343128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.398072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.398117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.412 [2024-10-25 15:23:14.398135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.956 ms 00:18:32.412 [2024-10-25 15:23:14.398146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.398198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.398211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.412 [2024-10-25 15:23:14.398224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.412 [2024-10-25 15:23:14.398238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.398718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.398740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.412 [2024-10-25 15:23:14.398753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:32.412 [2024-10-25 15:23:14.398763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.398874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.398888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.412 [2024-10-25 15:23:14.398914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:32.412 [2024-10-25 15:23:14.398924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.417074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.417355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.412 [2024-10-25 
15:23:14.417386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:18:32.412 [2024-10-25 15:23:14.417405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.429310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:32.412 [2024-10-25 15:23:14.435218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.435259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:32.412 [2024-10-25 15:23:14.435274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.737 ms 00:18:32.412 [2024-10-25 15:23:14.435286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.528269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.528347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:32.412 [2024-10-25 15:23:14.528364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.098 ms 00:18:32.412 [2024-10-25 15:23:14.528378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.412 [2024-10-25 15:23:14.528565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.412 [2024-10-25 15:23:14.528584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:32.412 [2024-10-25 15:23:14.528595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:32.413 [2024-10-25 15:23:14.528609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.565790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.565854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:32.413 [2024-10-25 15:23:14.565870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.186 ms 00:18:32.413 [2024-10-25 15:23:14.565883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.601643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.601838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:32.413 [2024-10-25 15:23:14.601863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.775 ms 00:18:32.413 [2024-10-25 15:23:14.601876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.602565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.602590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:32.413 [2024-10-25 15:23:14.602602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:18:32.413 [2024-10-25 15:23:14.602615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.705944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.706021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:32.413 [2024-10-25 15:23:14.706039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.439 ms 00:18:32.413 [2024-10-25 15:23:14.706054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 
15:23:14.744321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.744386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:32.413 [2024-10-25 15:23:14.744403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.243 ms 00:18:32.413 [2024-10-25 15:23:14.744417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.782292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.782368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:32.413 [2024-10-25 15:23:14.782385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.887 ms 00:18:32.413 [2024-10-25 15:23:14.782397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.819733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.819928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:32.413 [2024-10-25 15:23:14.819952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.353 ms 00:18:32.413 [2024-10-25 15:23:14.819966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.820044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.820066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:32.413 [2024-10-25 15:23:14.820077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:32.413 [2024-10-25 15:23:14.820090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.820218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.413 [2024-10-25 15:23:14.820235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:32.413 [2024-10-25 15:23:14.820246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:32.413 [2024-10-25 15:23:14.820259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.413 [2024-10-25 15:23:14.821239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4088.243 ms, result 0 00:18:32.413 { 00:18:32.413 "name": "ftl0", 00:18:32.413 "uuid": "befa34e6-067d-4a85-845a-220eb94a2424" 00:18:32.413 } 00:18:32.413 15:23:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:32.413 15:23:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:32.413 15:23:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:32.413 15:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:32.673 [2024-10-25 15:23:15.161294] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:32.673 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:32.673 Zero copy mechanism will not be used. 00:18:32.673 Running I/O for 4 seconds... 
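The 69632-byte I/O size in this first pass is 17 x 4096 (68 KiB), which exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice that zero copy will not be used. The step can be re-run standalone with the same perform_tests flags seen above (queue depth, workload, run time in seconds, I/O size in bytes), using the paths of this environment:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
    -q 1 -w randwrite -t 4 -o $((17 * 4096))   # 69632 > 65536, so zero copy is skipped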
00:18:34.548 1528.00 IOPS, 101.47 MiB/s [2024-10-25T15:23:18.213Z] 1535.00 IOPS, 101.93 MiB/s [2024-10-25T15:23:19.650Z] 1592.00 IOPS, 105.72 MiB/s [2024-10-25T15:23:19.650Z] 1667.50 IOPS, 110.73 MiB/s 00:18:36.922 Latency(us) 00:18:36.922 [2024-10-25T15:23:19.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.922 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:36.922 ftl0 : 4.00 1666.85 110.69 0.00 0.00 629.83 183.42 2263.49 00:18:36.922 [2024-10-25T15:23:19.650Z] =================================================================================================================== 00:18:36.922 [2024-10-25T15:23:19.650Z] Total : 1666.85 110.69 0.00 0.00 629.83 183.42 2263.49 00:18:36.922 { 00:18:36.922 "results": [ 00:18:36.922 { 00:18:36.922 "job": "ftl0", 00:18:36.922 "core_mask": "0x1", 00:18:36.922 "workload": "randwrite", 00:18:36.922 "status": "finished", 00:18:36.922 "queue_depth": 1, 00:18:36.922 "io_size": 69632, 00:18:36.922 "runtime": 4.002158, 00:18:36.922 "iops": 1666.8507340289914, 00:18:36.922 "mibps": 110.6893065566127, 00:18:36.922 "io_failed": 0, 00:18:36.922 "io_timeout": 0, 00:18:36.922 "avg_latency_us": 629.8254163709252, 00:18:36.922 "min_latency_us": 183.4152610441767, 00:18:36.922 "max_latency_us": 2263.492369477912 00:18:36.922 } 00:18:36.922 ], 00:18:36.922 "core_count": 1 00:18:36.922 } 00:18:36.922 [2024-10-25 15:23:19.166605] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:36.922 15:23:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:36.922 [2024-10-25 15:23:19.264059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:36.922 Running I/O for 4 seconds... 
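In these result tables the MiB/s column follows directly from IOPS times I/O size; a quick sanity check of the first run's totals (plain arithmetic, nothing assumed beyond the table above):
echo '1666.85 * 69632 / 1048576' | bc -l    # ~= 110.69 MiB/s, matching the Total row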
00:18:38.789 11312.00 IOPS, 44.19 MiB/s [2024-10-25T15:23:22.461Z] 10398.00 IOPS, 40.62 MiB/s [2024-10-25T15:23:23.395Z] 10429.67 IOPS, 40.74 MiB/s [2024-10-25T15:23:23.395Z] 10591.00 IOPS, 41.37 MiB/s 00:18:40.667 Latency(us) 00:18:40.667 [2024-10-25T15:23:23.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.667 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.667 ftl0 : 4.02 10580.22 41.33 0.00 0.00 12072.16 223.72 34741.98 00:18:40.667 [2024-10-25T15:23:23.395Z] =================================================================================================================== 00:18:40.667 [2024-10-25T15:23:23.395Z] Total : 10580.22 41.33 0.00 0.00 12072.16 0.00 34741.98 00:18:40.667 { 00:18:40.667 "results": [ 00:18:40.667 { 00:18:40.667 "job": "ftl0", 00:18:40.668 "core_mask": "0x1", 00:18:40.668 "workload": "randwrite", 00:18:40.668 "status": "finished", 00:18:40.668 "queue_depth": 128, 00:18:40.668 "io_size": 4096, 00:18:40.668 "runtime": 4.0157, 00:18:40.668 "iops": 10580.222626192195, 00:18:40.668 "mibps": 41.32899463356326, 00:18:40.668 "io_failed": 0, 00:18:40.668 "io_timeout": 0, 00:18:40.668 "avg_latency_us": 12072.16329863432, 00:18:40.668 "min_latency_us": 223.71726907630523, 00:18:40.668 "max_latency_us": 34741.97590361446 00:18:40.668 } 00:18:40.668 ], 00:18:40.668 "core_count": 1 00:18:40.668 } 00:18:40.668 [2024-10-25 15:23:23.284087] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:40.668 15:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:40.927 [2024-10-25 15:23:23.408390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:40.927 Running I/O for 4 seconds... 
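Each pass also prints the machine-readable JSON block shown above. If that blob were redirected to a file, the headline figures could be pulled out with jq; a sketch under the assumption that the output was captured as results.json (a hypothetical file name, the field names are those in the JSON above):
# Hypothetical capture: assumes the JSON result block was saved to results.json.
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json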
00:18:42.801 8024.00 IOPS, 31.34 MiB/s [2024-10-25T15:23:26.496Z] 7979.50 IOPS, 31.17 MiB/s [2024-10-25T15:23:27.434Z] 7870.00 IOPS, 30.74 MiB/s [2024-10-25T15:23:27.434Z] 7887.50 IOPS, 30.81 MiB/s 00:18:44.706 Latency(us) 00:18:44.706 [2024-10-25T15:23:27.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.706 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:44.706 Verification LBA range: start 0x0 length 0x1400000 00:18:44.706 ftl0 : 4.01 7900.51 30.86 0.00 0.00 16155.09 259.91 35373.65 00:18:44.706 [2024-10-25T15:23:27.434Z] =================================================================================================================== 00:18:44.706 [2024-10-25T15:23:27.434Z] Total : 7900.51 30.86 0.00 0.00 16155.09 0.00 35373.65 00:18:44.706 [2024-10-25 15:23:27.430992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:44.706 { 00:18:44.706 "results": [ 00:18:44.706 { 00:18:44.706 "job": "ftl0", 00:18:44.706 "core_mask": "0x1", 00:18:44.706 "workload": "verify", 00:18:44.706 "status": "finished", 00:18:44.706 "verify_range": { 00:18:44.706 "start": 0, 00:18:44.706 "length": 20971520 00:18:44.706 }, 00:18:44.706 "queue_depth": 128, 00:18:44.706 "io_size": 4096, 00:18:44.706 "runtime": 4.009489, 00:18:44.706 "iops": 7900.508019849911, 00:18:44.706 "mibps": 30.861359452538714, 00:18:44.706 "io_failed": 0, 00:18:44.706 "io_timeout": 0, 00:18:44.706 "avg_latency_us": 16155.092166779314, 00:18:44.706 "min_latency_us": 259.906827309237, 00:18:44.706 "max_latency_us": 35373.648192771085 00:18:44.706 } 00:18:44.706 ], 00:18:44.706 "core_count": 1 00:18:44.706 } 00:18:44.966 15:23:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:44.966 [2024-10-25 15:23:27.634234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.966 [2024-10-25 15:23:27.634511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.966 [2024-10-25 15:23:27.634538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.966 [2024-10-25 15:23:27.634556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.966 [2024-10-25 15:23:27.634593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:44.966 [2024-10-25 15:23:27.638813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.966 [2024-10-25 15:23:27.638849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.966 [2024-10-25 15:23:27.638866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.203 ms 00:18:44.966 [2024-10-25 15:23:27.638876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.966 [2024-10-25 15:23:27.640515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.966 [2024-10-25 15:23:27.640556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.966 [2024-10-25 15:23:27.640572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.602 ms 00:18:44.966 [2024-10-25 15:23:27.640584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.843048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.843115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:18:45.226 [2024-10-25 15:23:27.843139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 202.762 ms 00:18:45.226 [2024-10-25 15:23:27.843151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.848214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.848364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.226 [2024-10-25 15:23:27.848389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.010 ms 00:18:45.226 [2024-10-25 15:23:27.848400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.884063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.884264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.226 [2024-10-25 15:23:27.884292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.659 ms 00:18:45.226 [2024-10-25 15:23:27.884303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.906137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.906193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.226 [2024-10-25 15:23:27.906216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.826 ms 00:18:45.226 [2024-10-25 15:23:27.906231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.906376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.906390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.226 [2024-10-25 15:23:27.906408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:45.226 [2024-10-25 15:23:27.906418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.226 [2024-10-25 15:23:27.942207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.226 [2024-10-25 15:23:27.942247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:45.226 [2024-10-25 15:23:27.942264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.826 ms 00:18:45.226 [2024-10-25 15:23:27.942274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.485 [2024-10-25 15:23:27.978150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.485 [2024-10-25 15:23:27.978227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:45.485 [2024-10-25 15:23:27.978248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.884 ms 00:18:45.485 [2024-10-25 15:23:27.978259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.485 [2024-10-25 15:23:28.016131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.485 [2024-10-25 15:23:28.016361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.485 [2024-10-25 15:23:28.016392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.860 ms 00:18:45.485 [2024-10-25 15:23:28.016403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.485 [2024-10-25 15:23:28.051784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.485 [2024-10-25 15:23:28.051928] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.485 [2024-10-25 15:23:28.051970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.269 ms 00:18:45.485 [2024-10-25 15:23:28.051989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.485 [2024-10-25 15:23:28.052039] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.485 [2024-10-25 15:23:28.052058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.485 [2024-10-25 15:23:28.052074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.485 [2024-10-25 15:23:28.052085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.485 [2024-10-25 15:23:28.052099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.485 [2024-10-25 15:23:28.052110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:45.486 [2024-10-25 15:23:28.052364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.052993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:45.486 [2024-10-25 15:23:28.053296] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:18:45.486 [2024-10-25 15:23:28.053311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:18:45.487 [2024-10-25 15:23:28.053321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:18:45.487 [2024-10-25 15:23:28.053334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:18:45.487 [2024-10-25 15:23:28.053353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:45.487 [2024-10-25 15:23:28.053366] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: befa34e6-067d-4a85-845a-220eb94a2424
00:18:45.487 [2024-10-25 15:23:28.053377] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:45.487 [2024-10-25 15:23:28.053390] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:45.487 [2024-10-25 15:23:28.053400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:45.487 [2024-10-25 15:23:28.053413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:45.487 [2024-10-25 15:23:28.053426] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:45.487 [2024-10-25 15:23:28.053439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:45.487 [2024-10-25 15:23:28.053449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:45.487 [2024-10-25 15:23:28.053464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:45.487 [2024-10-25 15:23:28.053473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:45.487 [2024-10-25 15:23:28.053486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.487 [2024-10-25 15:23:28.053496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:45.487 [2024-10-25 15:23:28.053509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms
00:18:45.487 [2024-10-25 15:23:28.053519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.073449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.487 [2024-10-25 15:23:28.073486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:45.487 [2024-10-25 15:23:28.073506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.902 ms
00:18:45.487 [2024-10-25 15:23:28.073516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.074025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.487 [2024-10-25 15:23:28.074036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:45.487 [2024-10-25 15:23:28.074049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms
00:18:45.487 [2024-10-25 15:23:28.074059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.128842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.487 [2024-10-25 15:23:28.128994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:45.487 [2024-10-25 15:23:28.129024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.487 [2024-10-25 15:23:28.129035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.129095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.487 [2024-10-25 15:23:28.129106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:45.487 [2024-10-25 15:23:28.129119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.487 [2024-10-25 15:23:28.129129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.129263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.487 [2024-10-25 15:23:28.129279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:45.487 [2024-10-25 15:23:28.129296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.487 [2024-10-25 15:23:28.129306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.487 [2024-10-25 15:23:28.129326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.487 [2024-10-25 15:23:28.129337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:45.487 [2024-10-25 15:23:28.129349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.487 [2024-10-25 15:23:28.129359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.255733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.255800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:45.746 [2024-10-25 15:23:28.255827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.255837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.358429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:45.746 [2024-10-25 15:23:28.358448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.358458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.358583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:45.746 [2024-10-25 15:23:28.358597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.358611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.358679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:45.746 [2024-10-25 15:23:28.358691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.358701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.358843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:45.746 [2024-10-25 15:23:28.358859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.358869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.358936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:45.746 [2024-10-25 15:23:28.358949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.358960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.358998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.359009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:45.746 [2024-10-25 15:23:28.359022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.359032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.359081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:45.746 [2024-10-25 15:23:28.359103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:45.746 [2024-10-25 15:23:28.359116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:45.746 [2024-10-25 15:23:28.359126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.746 [2024-10-25 15:23:28.359272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 726.176 ms, result 0
00:18:45.746 true
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75001
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75001 ']'
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75001
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75001
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:45.746 killing process with pid 75001
Received shutdown signal, test time was about 4.000000 seconds
00:18:45.746
00:18:45.746                                                Latency(us)
00:18:45.746 [2024-10-25T15:23:28.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:45.746 [2024-10-25T15:23:28.474Z] ===================================================================================================================
00:18:45.746 [2024-10-25T15:23:28.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75001'
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75001
00:18:45.746 15:23:28 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75001
00:18:49.930 Remove shared memory files
00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:49.930 15:23:32
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:49.930 ************************************ 00:18:49.930 END TEST ftl_bdevperf 00:18:49.930 ************************************ 00:18:49.930 00:18:49.930 real 0m25.658s 00:18:49.930 user 0m28.249s 00:18:49.930 sys 0m1.279s 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:49.930 15:23:32 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 15:23:32 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.930 15:23:32 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:49.930 15:23:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.930 15:23:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:49.930 ************************************ 00:18:49.930 START TEST ftl_trim 00:18:49.930 ************************************ 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:49.930 * Looking for test storage... 00:18:49.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lcov --version 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.930 15:23:32 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:18:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.930 --rc genhtml_branch_coverage=1 00:18:49.930 --rc genhtml_function_coverage=1 00:18:49.930 --rc genhtml_legend=1 00:18:49.930 --rc geninfo_all_blocks=1 00:18:49.930 --rc geninfo_unexecuted_blocks=1 00:18:49.930 00:18:49.930 ' 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:18:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.930 --rc genhtml_branch_coverage=1 00:18:49.930 --rc genhtml_function_coverage=1 00:18:49.930 --rc genhtml_legend=1 00:18:49.930 --rc geninfo_all_blocks=1 00:18:49.930 --rc geninfo_unexecuted_blocks=1 00:18:49.930 00:18:49.930 ' 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:18:49.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.930 --rc genhtml_branch_coverage=1 00:18:49.930 --rc genhtml_function_coverage=1 00:18:49.930 --rc genhtml_legend=1 00:18:49.930 --rc geninfo_all_blocks=1 00:18:49.930 --rc geninfo_unexecuted_blocks=1 00:18:49.930 00:18:49.930 ' 00:18:49.930 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:18:49.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.931 --rc genhtml_branch_coverage=1 00:18:49.931 --rc genhtml_function_coverage=1 00:18:49.931 --rc genhtml_legend=1 00:18:49.931 --rc geninfo_all_blocks=1 00:18:49.931 --rc geninfo_unexecuted_blocks=1 00:18:49.931 00:18:49.931 ' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.931 15:23:32 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75366 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:49.931 15:23:32 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75366 00:18:49.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75366 ']' 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.931 15:23:32 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:49.931 [2024-10-25 15:23:32.551548] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:18:49.931 [2024-10-25 15:23:32.551676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75366 ] 00:18:50.190 [2024-10-25 15:23:32.734376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.190 [2024-10-25 15:23:32.844968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.190 [2024-10-25 15:23:32.845107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.190 [2024-10-25 15:23:32.845141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.128 15:23:33 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.128 15:23:33 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:51.128 15:23:33 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:51.396 15:23:34 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:51.396 15:23:34 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:51.396 15:23:34 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:51.396 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:51.396 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:51.396 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:51.396 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:51.396 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.654 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:51.654 { 00:18:51.654 "name": "nvme0n1", 00:18:51.654 "aliases": [ 
00:18:51.654 "f181bf97-96cf-4f9f-9fbf-6dd4d3298115" 00:18:51.654 ], 00:18:51.654 "product_name": "NVMe disk", 00:18:51.654 "block_size": 4096, 00:18:51.654 "num_blocks": 1310720, 00:18:51.654 "uuid": "f181bf97-96cf-4f9f-9fbf-6dd4d3298115", 00:18:51.654 "numa_id": -1, 00:18:51.654 "assigned_rate_limits": { 00:18:51.654 "rw_ios_per_sec": 0, 00:18:51.654 "rw_mbytes_per_sec": 0, 00:18:51.654 "r_mbytes_per_sec": 0, 00:18:51.654 "w_mbytes_per_sec": 0 00:18:51.654 }, 00:18:51.654 "claimed": true, 00:18:51.654 "claim_type": "read_many_write_one", 00:18:51.654 "zoned": false, 00:18:51.654 "supported_io_types": { 00:18:51.654 "read": true, 00:18:51.654 "write": true, 00:18:51.654 "unmap": true, 00:18:51.654 "flush": true, 00:18:51.654 "reset": true, 00:18:51.654 "nvme_admin": true, 00:18:51.654 "nvme_io": true, 00:18:51.654 "nvme_io_md": false, 00:18:51.654 "write_zeroes": true, 00:18:51.654 "zcopy": false, 00:18:51.654 "get_zone_info": false, 00:18:51.654 "zone_management": false, 00:18:51.654 "zone_append": false, 00:18:51.654 "compare": true, 00:18:51.654 "compare_and_write": false, 00:18:51.654 "abort": true, 00:18:51.654 "seek_hole": false, 00:18:51.654 "seek_data": false, 00:18:51.654 "copy": true, 00:18:51.654 "nvme_iov_md": false 00:18:51.654 }, 00:18:51.654 "driver_specific": { 00:18:51.654 "nvme": [ 00:18:51.654 { 00:18:51.654 "pci_address": "0000:00:11.0", 00:18:51.654 "trid": { 00:18:51.654 "trtype": "PCIe", 00:18:51.654 "traddr": "0000:00:11.0" 00:18:51.654 }, 00:18:51.654 "ctrlr_data": { 00:18:51.654 "cntlid": 0, 00:18:51.654 "vendor_id": "0x1b36", 00:18:51.654 "model_number": "QEMU NVMe Ctrl", 00:18:51.654 "serial_number": "12341", 00:18:51.654 "firmware_revision": "8.0.0", 00:18:51.654 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.654 "oacs": { 00:18:51.654 "security": 0, 00:18:51.654 "format": 1, 00:18:51.654 "firmware": 0, 00:18:51.654 "ns_manage": 1 00:18:51.654 }, 00:18:51.654 "multi_ctrlr": false, 00:18:51.654 "ana_reporting": false 00:18:51.654 }, 00:18:51.654 "vs": { 00:18:51.654 "nvme_version": "1.4" 00:18:51.654 }, 00:18:51.654 "ns_data": { 00:18:51.654 "id": 1, 00:18:51.654 "can_share": false 00:18:51.654 } 00:18:51.654 } 00:18:51.654 ], 00:18:51.654 "mp_policy": "active_passive" 00:18:51.654 } 00:18:51.654 } 00:18:51.654 ]' 00:18:51.654 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:51.654 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:51.654 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:51.654 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:51.655 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:51.655 15:23:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:51.655 15:23:34 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:51.655 15:23:34 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:51.655 15:23:34 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:51.655 15:23:34 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:51.655 15:23:34 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:51.913 15:23:34 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=7782f962-dfab-4189-9bca-a865b3179a76 00:18:51.913 15:23:34 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:51.913 15:23:34 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 7782f962-dfab-4189-9bca-a865b3179a76 00:18:52.171 15:23:34 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:52.429 15:23:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=7af62b3b-b4df-4fa4-8a94-6082b2067501 00:18:52.429 15:23:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7af62b3b-b4df-4fa4-8a94-6082b2067501 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:52.429 15:23:35 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.429 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.429 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:52.429 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:52.429 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:52.429 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:52.688 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:52.688 { 00:18:52.688 "name": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:52.688 "aliases": [ 00:18:52.688 "lvs/nvme0n1p0" 00:18:52.688 ], 00:18:52.688 "product_name": "Logical Volume", 00:18:52.688 "block_size": 4096, 00:18:52.688 "num_blocks": 26476544, 00:18:52.688 "uuid": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:52.688 "assigned_rate_limits": { 00:18:52.688 "rw_ios_per_sec": 0, 00:18:52.688 "rw_mbytes_per_sec": 0, 00:18:52.688 "r_mbytes_per_sec": 0, 00:18:52.688 "w_mbytes_per_sec": 0 00:18:52.688 }, 00:18:52.688 "claimed": false, 00:18:52.688 "zoned": false, 00:18:52.688 "supported_io_types": { 00:18:52.688 "read": true, 00:18:52.688 "write": true, 00:18:52.688 "unmap": true, 00:18:52.688 "flush": false, 00:18:52.688 "reset": true, 00:18:52.688 "nvme_admin": false, 00:18:52.688 "nvme_io": false, 00:18:52.688 "nvme_io_md": false, 00:18:52.688 "write_zeroes": true, 00:18:52.688 "zcopy": false, 00:18:52.688 "get_zone_info": false, 00:18:52.688 "zone_management": false, 00:18:52.688 "zone_append": false, 00:18:52.688 "compare": false, 00:18:52.688 "compare_and_write": false, 00:18:52.688 "abort": false, 00:18:52.688 "seek_hole": true, 00:18:52.688 "seek_data": true, 00:18:52.688 "copy": false, 00:18:52.688 "nvme_iov_md": false 00:18:52.688 }, 00:18:52.688 "driver_specific": { 00:18:52.688 "lvol": { 00:18:52.688 "lvol_store_uuid": "7af62b3b-b4df-4fa4-8a94-6082b2067501", 00:18:52.688 "base_bdev": "nvme0n1", 00:18:52.688 "thin_provision": true, 00:18:52.688 "num_allocated_clusters": 0, 00:18:52.688 "snapshot": false, 00:18:52.688 "clone": false, 00:18:52.688 "esnap_clone": false 00:18:52.688 } 00:18:52.688 } 00:18:52.688 } 00:18:52.688 ]' 00:18:52.688 15:23:35 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:52.688 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:52.688 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:52.947 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:52.947 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:52.947 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:52.947 15:23:35 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:52.947 15:23:35 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:52.947 15:23:35 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:53.206 15:23:35 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:53.206 15:23:35 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:53.206 15:23:35 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.206 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.206 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.206 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:53.206 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:53.206 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.464 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.464 { 00:18:53.464 "name": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:53.464 "aliases": [ 00:18:53.464 "lvs/nvme0n1p0" 00:18:53.464 ], 00:18:53.464 "product_name": "Logical Volume", 00:18:53.464 "block_size": 4096, 00:18:53.464 "num_blocks": 26476544, 00:18:53.464 "uuid": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:53.464 "assigned_rate_limits": { 00:18:53.464 "rw_ios_per_sec": 0, 00:18:53.464 "rw_mbytes_per_sec": 0, 00:18:53.464 "r_mbytes_per_sec": 0, 00:18:53.464 "w_mbytes_per_sec": 0 00:18:53.464 }, 00:18:53.464 "claimed": false, 00:18:53.464 "zoned": false, 00:18:53.464 "supported_io_types": { 00:18:53.464 "read": true, 00:18:53.464 "write": true, 00:18:53.464 "unmap": true, 00:18:53.464 "flush": false, 00:18:53.464 "reset": true, 00:18:53.464 "nvme_admin": false, 00:18:53.464 "nvme_io": false, 00:18:53.464 "nvme_io_md": false, 00:18:53.464 "write_zeroes": true, 00:18:53.465 "zcopy": false, 00:18:53.465 "get_zone_info": false, 00:18:53.465 "zone_management": false, 00:18:53.465 "zone_append": false, 00:18:53.465 "compare": false, 00:18:53.465 "compare_and_write": false, 00:18:53.465 "abort": false, 00:18:53.465 "seek_hole": true, 00:18:53.465 "seek_data": true, 00:18:53.465 "copy": false, 00:18:53.465 "nvme_iov_md": false 00:18:53.465 }, 00:18:53.465 "driver_specific": { 00:18:53.465 "lvol": { 00:18:53.465 "lvol_store_uuid": "7af62b3b-b4df-4fa4-8a94-6082b2067501", 00:18:53.465 "base_bdev": "nvme0n1", 00:18:53.465 "thin_provision": true, 00:18:53.465 "num_allocated_clusters": 0, 00:18:53.465 "snapshot": false, 00:18:53.465 "clone": false, 00:18:53.465 "esnap_clone": false 00:18:53.465 } 00:18:53.465 } 00:18:53.465 } 00:18:53.465 ]' 00:18:53.465 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.465 15:23:35 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:18:53.465 15:23:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.465 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:53.465 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.465 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.465 15:23:36 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:53.465 15:23:36 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:53.723 15:23:36 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:53.723 15:23:36 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:53.723 15:23:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d166a403-20c3-4203-9c44-587be3c4c6cd 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:53.723 { 00:18:53.723 "name": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:53.723 "aliases": [ 00:18:53.723 "lvs/nvme0n1p0" 00:18:53.723 ], 00:18:53.723 "product_name": "Logical Volume", 00:18:53.723 "block_size": 4096, 00:18:53.723 "num_blocks": 26476544, 00:18:53.723 "uuid": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:53.723 "assigned_rate_limits": { 00:18:53.723 "rw_ios_per_sec": 0, 00:18:53.723 "rw_mbytes_per_sec": 0, 00:18:53.723 "r_mbytes_per_sec": 0, 00:18:53.723 "w_mbytes_per_sec": 0 00:18:53.723 }, 00:18:53.723 "claimed": false, 00:18:53.723 "zoned": false, 00:18:53.723 "supported_io_types": { 00:18:53.723 "read": true, 00:18:53.723 "write": true, 00:18:53.723 "unmap": true, 00:18:53.723 "flush": false, 00:18:53.723 "reset": true, 00:18:53.723 "nvme_admin": false, 00:18:53.723 "nvme_io": false, 00:18:53.723 "nvme_io_md": false, 00:18:53.723 "write_zeroes": true, 00:18:53.723 "zcopy": false, 00:18:53.723 "get_zone_info": false, 00:18:53.723 "zone_management": false, 00:18:53.723 "zone_append": false, 00:18:53.723 "compare": false, 00:18:53.723 "compare_and_write": false, 00:18:53.723 "abort": false, 00:18:53.723 "seek_hole": true, 00:18:53.723 "seek_data": true, 00:18:53.723 "copy": false, 00:18:53.723 "nvme_iov_md": false 00:18:53.723 }, 00:18:53.723 "driver_specific": { 00:18:53.723 "lvol": { 00:18:53.723 "lvol_store_uuid": "7af62b3b-b4df-4fa4-8a94-6082b2067501", 00:18:53.723 "base_bdev": "nvme0n1", 00:18:53.723 "thin_provision": true, 00:18:53.723 "num_allocated_clusters": 0, 00:18:53.723 "snapshot": false, 00:18:53.723 "clone": false, 00:18:53.723 "esnap_clone": false 00:18:53.723 } 00:18:53.723 } 00:18:53.723 } 00:18:53.723 ]' 00:18:53.723 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:53.981 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:53.981 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:53.981 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:18:53.981 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:53.981 15:23:36 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:53.981 15:23:36 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:53.981 15:23:36 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d166a403-20c3-4203-9c44-587be3c4c6cd -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:54.240 [2024-10-25 15:23:36.719704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.240 [2024-10-25 15:23:36.719757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.240 [2024-10-25 15:23:36.719779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.240 [2024-10-25 15:23:36.719790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.240 [2024-10-25 15:23:36.722932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.240 [2024-10-25 15:23:36.722972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.240 [2024-10-25 15:23:36.722990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:18:54.240 [2024-10-25 15:23:36.723000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.240 [2024-10-25 15:23:36.723130] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.240 [2024-10-25 15:23:36.724097] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.240 [2024-10-25 15:23:36.724133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.240 [2024-10-25 15:23:36.724145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.240 [2024-10-25 15:23:36.724158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:18:54.240 [2024-10-25 15:23:36.724168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.240 [2024-10-25 15:23:36.724287] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 807ee30e-b752-4504-9751-1143cda47acc 00:18:54.240 [2024-10-25 15:23:36.725650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.240 [2024-10-25 15:23:36.725797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:54.240 [2024-10-25 15:23:36.725817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:54.240 [2024-10-25 15:23:36.725829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.240 [2024-10-25 15:23:36.733303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.240 [2024-10-25 15:23:36.733432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.241 [2024-10-25 15:23:36.733509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.404 ms 00:18:54.241 [2024-10-25 15:23:36.733550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.733739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 [2024-10-25 15:23:36.733785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.241 [2024-10-25 15:23:36.733818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.082 ms 00:18:54.241 [2024-10-25 15:23:36.733902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.733973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 [2024-10-25 15:23:36.734009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.241 [2024-10-25 15:23:36.734106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.241 [2024-10-25 15:23:36.734169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.734247] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:54.241 [2024-10-25 15:23:36.738853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 [2024-10-25 15:23:36.738988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.241 [2024-10-25 15:23:36.739063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.617 ms 00:18:54.241 [2024-10-25 15:23:36.739099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.739203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 [2024-10-25 15:23:36.739332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.241 [2024-10-25 15:23:36.739411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.241 [2024-10-25 15:23:36.739459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.739517] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:54.241 [2024-10-25 15:23:36.739665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.241 [2024-10-25 15:23:36.739731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.241 [2024-10-25 15:23:36.739788] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.241 [2024-10-25 15:23:36.739901] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.241 [2024-10-25 15:23:36.740013] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.241 [2024-10-25 15:23:36.740077] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:54.241 [2024-10-25 15:23:36.740145] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.241 [2024-10-25 15:23:36.740195] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.241 [2024-10-25 15:23:36.740269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.241 [2024-10-25 15:23:36.740309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 [2024-10-25 15:23:36.740343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.241 [2024-10-25 15:23:36.740412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:18:54.241 [2024-10-25 15:23:36.740446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.740565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.241 
[2024-10-25 15:23:36.740619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.241 [2024-10-25 15:23:36.740694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:54.241 [2024-10-25 15:23:36.740725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.241 [2024-10-25 15:23:36.740865] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.241 [2024-10-25 15:23:36.740964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.241 [2024-10-25 15:23:36.741034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.241 [2024-10-25 15:23:36.741064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.241 [2024-10-25 15:23:36.741127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:54.241 [2024-10-25 15:23:36.741201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.241 [2024-10-25 15:23:36.741237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.241 [2024-10-25 15:23:36.741368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.241 [2024-10-25 15:23:36.741398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:54.241 [2024-10-25 15:23:36.741430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.241 [2024-10-25 15:23:36.741460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.241 [2024-10-25 15:23:36.741493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:54.241 [2024-10-25 15:23:36.741572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.241 [2024-10-25 15:23:36.741643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:54.241 [2024-10-25 15:23:36.741675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.241 [2024-10-25 15:23:36.741738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.241 [2024-10-25 15:23:36.741883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.241 [2024-10-25 15:23:36.741913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:54.241 [2024-10-25 15:23:36.741944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.241 [2024-10-25 15:23:36.742017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.241 [2024-10-25 15:23:36.742057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:54.241 [2024-10-25 15:23:36.742086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.241 [2024-10-25 15:23:36.742119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:54.241 [2024-10-25 15:23:36.742216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:54.241 [2024-10-25 15:23:36.742255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.241 [2024-10-25 15:23:36.742286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.241 [2024-10-25 15:23:36.742321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:54.241 [2024-10-25 15:23:36.742391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.241 [2024-10-25 15:23:36.742428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.241 [2024-10-25 15:23:36.742532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:54.241 [2024-10-25 15:23:36.742604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.241 [2024-10-25 15:23:36.742633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.241 [2024-10-25 15:23:36.742665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:54.241 [2024-10-25 15:23:36.742694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.742726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.241 [2024-10-25 15:23:36.742755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:54.241 [2024-10-25 15:23:36.742788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.742866] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.241 [2024-10-25 15:23:36.742916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.241 [2024-10-25 15:23:36.742949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.241 [2024-10-25 15:23:36.742982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.241 [2024-10-25 15:23:36.743012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.241 [2024-10-25 15:23:36.743050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.241 [2024-10-25 15:23:36.743128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.241 [2024-10-25 15:23:36.743166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.241 [2024-10-25 15:23:36.743207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.241 [2024-10-25 15:23:36.743241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.241 [2024-10-25 15:23:36.743277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.241 [2024-10-25 15:23:36.743330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.241 [2024-10-25 15:23:36.743425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:54.241 [2024-10-25 15:23:36.743479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:54.241 [2024-10-25 15:23:36.743527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:54.241 [2024-10-25 15:23:36.743542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:54.241 [2024-10-25 15:23:36.743553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:54.241 [2024-10-25 15:23:36.743565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:54.241 [2024-10-25 15:23:36.743575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:54.241 [2024-10-25 15:23:36.743588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:54.241 [2024-10-25 15:23:36.743598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:54.241 [2024-10-25 15:23:36.743614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:54.241 [2024-10-25 15:23:36.743624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:54.241 [2024-10-25 15:23:36.743636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:54.241 [2024-10-25 15:23:36.743646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:54.241 [2024-10-25 15:23:36.743659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:54.242 [2024-10-25 15:23:36.743669] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.242 [2024-10-25 15:23:36.743683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.242 [2024-10-25 15:23:36.743694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.242 [2024-10-25 15:23:36.743709] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.242 [2024-10-25 15:23:36.743720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.242 [2024-10-25 15:23:36.743732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.242 [2024-10-25 15:23:36.743744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.242 [2024-10-25 15:23:36.743763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.242 [2024-10-25 15:23:36.743775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:18:54.242 [2024-10-25 15:23:36.743787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.242 [2024-10-25 15:23:36.743903] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:54.242 [2024-10-25 15:23:36.743922] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:57.568 [2024-10-25 15:23:40.003381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.003456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:57.568 [2024-10-25 15:23:40.003478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3264.768 ms 00:18:57.568 [2024-10-25 15:23:40.003493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.043017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.043076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.568 [2024-10-25 15:23:40.043092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.215 ms 00:18:57.568 [2024-10-25 15:23:40.043106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.043288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.043324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.568 [2024-10-25 15:23:40.043336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:57.568 [2024-10-25 15:23:40.043350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.102108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.102173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.568 [2024-10-25 15:23:40.102217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.792 ms 00:18:57.568 [2024-10-25 15:23:40.102234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.102377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.102397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.568 [2024-10-25 15:23:40.102411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:57.568 [2024-10-25 15:23:40.102428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.102915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.102937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.568 [2024-10-25 15:23:40.102951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:18:57.568 [2024-10-25 15:23:40.102967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.103116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.103133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.568 [2024-10-25 15:23:40.103147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:57.568 [2024-10-25 15:23:40.103167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.125281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.125332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:57.568 [2024-10-25 15:23:40.125348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.077 ms 00:18:57.568 [2024-10-25 15:23:40.125361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.138588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:57.568 [2024-10-25 15:23:40.154936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.154993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.568 [2024-10-25 15:23:40.155012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.477 ms 00:18:57.568 [2024-10-25 15:23:40.155028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.258355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.258605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:57.568 [2024-10-25 15:23:40.258637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.378 ms 00:18:57.568 [2024-10-25 15:23:40.258651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.568 [2024-10-25 15:23:40.258876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.568 [2024-10-25 15:23:40.258889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.568 [2024-10-25 15:23:40.258916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:57.568 [2024-10-25 15:23:40.258926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.828 [2024-10-25 15:23:40.295532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.828 [2024-10-25 15:23:40.295579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:57.828 [2024-10-25 15:23:40.295601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.627 ms 00:18:57.828 [2024-10-25 15:23:40.295611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.828 [2024-10-25 15:23:40.330940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.828 [2024-10-25 15:23:40.330980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:57.828 [2024-10-25 15:23:40.330999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.309 ms 00:18:57.828 [2024-10-25 15:23:40.331009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.828 [2024-10-25 15:23:40.331720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.828 [2024-10-25 15:23:40.331742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.828 [2024-10-25 15:23:40.331757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:18:57.828 [2024-10-25 15:23:40.331767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.828 [2024-10-25 15:23:40.436338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.828 [2024-10-25 15:23:40.436528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:57.828 [2024-10-25 15:23:40.436561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.695 ms 00:18:57.828 [2024-10-25 15:23:40.436576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:57.828 [2024-10-25 15:23:40.475648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.828 [2024-10-25 15:23:40.475821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:18:57.828 [2024-10-25 15:23:40.475850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.963 ms
00:18:57.828 [2024-10-25 15:23:40.475861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.828 [2024-10-25 15:23:40.513939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.828 [2024-10-25 15:23:40.513999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:18:57.828 [2024-10-25 15:23:40.514018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.043 ms
00:18:57.828 [2024-10-25 15:23:40.514028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.828 [2024-10-25 15:23:40.551653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.828 [2024-10-25 15:23:40.551692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:57.828 [2024-10-25 15:23:40.551708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.597 ms
00:18:57.828 [2024-10-25 15:23:40.551735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.828 [2024-10-25 15:23:40.551822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.828 [2024-10-25 15:23:40.551835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:57.828 [2024-10-25 15:23:40.551852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:57.828 [2024-10-25 15:23:40.551865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.828 [2024-10-25 15:23:40.551956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.828 [2024-10-25 15:23:40.551968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:57.828 [2024-10-25 15:23:40.551981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:18:57.828 [2024-10-25 15:23:40.551991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.828 [2024-10-25 15:23:40.552895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:58.090 [2024-10-25 15:23:40.557226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3839.150 ms, result 0
{
00:18:58.090 "name": "ftl0",
00:18:58.090 "uuid": "807ee30e-b752-4504-9751-1143cda47acc"
00:18:58.090 }
00:18:58.090 [2024-10-25 15:23:40.558224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:58.090 15:23:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:58.090 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:18:58.090 15:23:40 ftl.ftl_trim --
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:58.358 [ 00:18:58.358 { 00:18:58.358 "name": "ftl0", 00:18:58.358 "aliases": [ 00:18:58.358 "807ee30e-b752-4504-9751-1143cda47acc" 00:18:58.358 ], 00:18:58.358 "product_name": "FTL disk", 00:18:58.358 "block_size": 4096, 00:18:58.358 "num_blocks": 23592960, 00:18:58.358 "uuid": "807ee30e-b752-4504-9751-1143cda47acc", 00:18:58.358 "assigned_rate_limits": { 00:18:58.358 "rw_ios_per_sec": 0, 00:18:58.358 "rw_mbytes_per_sec": 0, 00:18:58.358 "r_mbytes_per_sec": 0, 00:18:58.358 "w_mbytes_per_sec": 0 00:18:58.358 }, 00:18:58.358 "claimed": false, 00:18:58.358 "zoned": false, 00:18:58.358 "supported_io_types": { 00:18:58.358 "read": true, 00:18:58.358 "write": true, 00:18:58.358 "unmap": true, 00:18:58.358 "flush": true, 00:18:58.358 "reset": false, 00:18:58.358 "nvme_admin": false, 00:18:58.358 "nvme_io": false, 00:18:58.358 "nvme_io_md": false, 00:18:58.358 "write_zeroes": true, 00:18:58.358 "zcopy": false, 00:18:58.358 "get_zone_info": false, 00:18:58.358 "zone_management": false, 00:18:58.358 "zone_append": false, 00:18:58.358 "compare": false, 00:18:58.358 "compare_and_write": false, 00:18:58.358 "abort": false, 00:18:58.358 "seek_hole": false, 00:18:58.358 "seek_data": false, 00:18:58.358 "copy": false, 00:18:58.358 "nvme_iov_md": false 00:18:58.358 }, 00:18:58.358 "driver_specific": { 00:18:58.358 "ftl": { 00:18:58.358 "base_bdev": "d166a403-20c3-4203-9c44-587be3c4c6cd", 00:18:58.358 "cache": "nvc0n1p0" 00:18:58.358 } 00:18:58.358 } 00:18:58.358 } 00:18:58.358 ] 00:18:58.358 15:23:40 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:58.358 15:23:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:58.358 15:23:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:58.617 15:23:41 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:58.617 15:23:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:58.877 15:23:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:58.877 { 00:18:58.877 "name": "ftl0", 00:18:58.877 "aliases": [ 00:18:58.877 "807ee30e-b752-4504-9751-1143cda47acc" 00:18:58.877 ], 00:18:58.877 "product_name": "FTL disk", 00:18:58.877 "block_size": 4096, 00:18:58.877 "num_blocks": 23592960, 00:18:58.877 "uuid": "807ee30e-b752-4504-9751-1143cda47acc", 00:18:58.877 "assigned_rate_limits": { 00:18:58.877 "rw_ios_per_sec": 0, 00:18:58.877 "rw_mbytes_per_sec": 0, 00:18:58.877 "r_mbytes_per_sec": 0, 00:18:58.877 "w_mbytes_per_sec": 0 00:18:58.877 }, 00:18:58.877 "claimed": false, 00:18:58.877 "zoned": false, 00:18:58.877 "supported_io_types": { 00:18:58.877 "read": true, 00:18:58.877 "write": true, 00:18:58.877 "unmap": true, 00:18:58.877 "flush": true, 00:18:58.877 "reset": false, 00:18:58.877 "nvme_admin": false, 00:18:58.877 "nvme_io": false, 00:18:58.877 "nvme_io_md": false, 00:18:58.877 "write_zeroes": true, 00:18:58.877 "zcopy": false, 00:18:58.877 "get_zone_info": false, 00:18:58.877 "zone_management": false, 00:18:58.877 "zone_append": false, 00:18:58.877 "compare": false, 00:18:58.877 "compare_and_write": false, 00:18:58.877 "abort": false, 00:18:58.877 "seek_hole": false, 00:18:58.877 "seek_data": false, 00:18:58.877 "copy": false, 00:18:58.877 "nvme_iov_md": false 00:18:58.877 }, 00:18:58.877 "driver_specific": { 00:18:58.877 "ftl": { 00:18:58.877 "base_bdev": "d166a403-20c3-4203-9c44-587be3c4c6cd", 
00:18:58.877 "cache": "nvc0n1p0" 00:18:58.877 } 00:18:58.877 } 00:18:58.877 } 00:18:58.877 ]' 00:18:58.877 15:23:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:58.877 15:23:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:58.877 15:23:41 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:59.138 [2024-10-25 15:23:41.617314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.617367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:59.138 [2024-10-25 15:23:41.617383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.138 [2024-10-25 15:23:41.617396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.617437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:59.138 [2024-10-25 15:23:41.621536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.621567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:59.138 [2024-10-25 15:23:41.621590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms 00:18:59.138 [2024-10-25 15:23:41.621601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.622135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.622159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:59.138 [2024-10-25 15:23:41.622174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:59.138 [2024-10-25 15:23:41.622200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.625043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.625066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:59.138 [2024-10-25 15:23:41.625080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:18:59.138 [2024-10-25 15:23:41.625093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.630743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.630774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:59.138 [2024-10-25 15:23:41.630789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.625 ms 00:18:59.138 [2024-10-25 15:23:41.630798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.668071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.668113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:59.138 [2024-10-25 15:23:41.668133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.249 ms 00:18:59.138 [2024-10-25 15:23:41.668144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.690194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.690233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:59.138 [2024-10-25 15:23:41.690250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.983 ms 00:18:59.138 [2024-10-25 15:23:41.690261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.690470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.690491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:59.138 [2024-10-25 15:23:41.690505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:59.138 [2024-10-25 15:23:41.690515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.727832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.727972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:59.138 [2024-10-25 15:23:41.727999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.344 ms 00:18:59.138 [2024-10-25 15:23:41.728009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.764541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.764678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:59.138 [2024-10-25 15:23:41.764821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.458 ms 00:18:59.138 [2024-10-25 15:23:41.764859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.801046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.801190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:59.138 [2024-10-25 15:23:41.801339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.105 ms 00:18:59.138 [2024-10-25 15:23:41.801377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.837326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.138 [2024-10-25 15:23:41.837469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:59.138 [2024-10-25 15:23:41.837556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.849 ms 00:18:59.138 [2024-10-25 15:23:41.837591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.138 [2024-10-25 15:23:41.837753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:59.138 [2024-10-25 15:23:41.837808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.837921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.837973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838393] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.838975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:59.138 [2024-10-25 15:23:41.839916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 
[2024-10-25 15:23:41.840493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.840950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:18:59.139 [2024-10-25 15:23:41.841327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:59.139 [2024-10-25 15:23:41.841853] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:59.139 [2024-10-25 15:23:41.841868] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc 00:18:59.139 [2024-10-25 15:23:41.841880] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:59.139 [2024-10-25 15:23:41.841892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:59.139 [2024-10-25 15:23:41.841902] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.139 [2024-10-25 15:23:41.841915] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.139 [2024-10-25 15:23:41.841925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.139 [2024-10-25 15:23:41.841937] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:18:59.139 [2024-10-25 15:23:41.841950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.139 [2024-10-25 15:23:41.841963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.139 [2024-10-25 15:23:41.841972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.139 [2024-10-25 15:23:41.841985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.139 [2024-10-25 15:23:41.841995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.139 [2024-10-25 15:23:41.842008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.242 ms 00:18:59.139 [2024-10-25 15:23:41.842018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.139 [2024-10-25 15:23:41.862348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.139 [2024-10-25 15:23:41.862481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.139 [2024-10-25 15:23:41.862558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.307 ms 00:18:59.139 [2024-10-25 15:23:41.862594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.139 [2024-10-25 15:23:41.863200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.139 [2024-10-25 15:23:41.863250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.139 [2024-10-25 15:23:41.863407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:18:59.139 [2024-10-25 15:23:41.863443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.398 [2024-10-25 15:23:41.931810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.399 [2024-10-25 15:23:41.932033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.399 [2024-10-25 15:23:41.932115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.399 [2024-10-25 15:23:41.932155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.399 [2024-10-25 15:23:41.932343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.399 [2024-10-25 15:23:41.932452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.399 [2024-10-25 15:23:41.932542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.399 [2024-10-25 15:23:41.932573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.399 [2024-10-25 15:23:41.932672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.399 [2024-10-25 15:23:41.932711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.399 [2024-10-25 15:23:41.932748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.399 [2024-10-25 15:23:41.932779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.399 [2024-10-25 15:23:41.932842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.399 [2024-10-25 15:23:41.932929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.399 [2024-10-25 15:23:41.933014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.399 [2024-10-25 15:23:41.933045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.399 [2024-10-25 15:23:42.066675] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.399 [2024-10-25 15:23:42.066895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.399 [2024-10-25 15:23:42.066987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.399 [2024-10-25 15:23:42.067024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.168839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.169056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.658 [2024-10-25 15:23:42.169240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.169282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.169433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.169515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.658 [2024-10-25 15:23:42.169579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.169611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.169693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.169736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.658 [2024-10-25 15:23:42.169847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.169889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.170078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.170124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.658 [2024-10-25 15:23:42.170240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.170282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.170383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.170465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.658 [2024-10-25 15:23:42.170539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.170571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.170650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.170688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.658 [2024-10-25 15:23:42.170733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.170816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.658 [2024-10-25 15:23:42.170951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.658 [2024-10-25 15:23:42.170995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.658 [2024-10-25 15:23:42.171030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.658 [2024-10-25 15:23:42.171064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:59.658 [2024-10-25 15:23:42.171353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 554.920 ms, result 0 00:18:59.658 true 00:18:59.658 15:23:42 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75366 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75366 ']' 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75366 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75366 00:18:59.658 killing process with pid 75366 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75366' 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75366 00:18:59.658 15:23:42 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75366 00:19:04.957 15:23:47 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:05.936 65536+0 records in 00:19:05.936 65536+0 records out 00:19:05.936 268435456 bytes (268 MB, 256 MiB) copied, 0.998188 s, 269 MB/s 00:19:05.936 15:23:48 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:05.936 [2024-10-25 15:23:48.413871] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
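The dd step above generates the 256 MiB random pattern that spdk_dd then writes through ftl0, and the numbers check out: 65536 blocks of 4 KiB are 65536 * 4096 = 268435456 bytes, exactly the '268 MB, 256 MiB' dd reports, produced here at about 269 MB/s from /dev/urandom. A minimal sketch of the same step outside the harness; the xtrace shows no of= argument, which suggests trim.sh redirects dd's stdout, so the destination below is taken from the --if path spdk_dd reads and is only illustrative:

    # Generate the 256 MiB random test pattern (65536 x 4 KiB), as ftl/trim.sh does.
    dd if=/dev/urandom bs=4K count=65536 > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    # Sanity-check the size: expect 268435456 bytes (256 MiB).
    stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern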
00:19:05.936 [2024-10-25 15:23:48.414157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75567 ] 00:19:05.936 [2024-10-25 15:23:48.595811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.196 [2024-10-25 15:23:48.710193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.455 [2024-10-25 15:23:49.079013] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:06.456 [2024-10-25 15:23:49.079086] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:06.716 [2024-10-25 15:23:49.240892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.240959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:06.716 [2024-10-25 15:23:49.240976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.716 [2024-10-25 15:23:49.240987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.244199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.244239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.716 [2024-10-25 15:23:49.244252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.196 ms 00:19:06.716 [2024-10-25 15:23:49.244263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.244367] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:06.716 [2024-10-25 15:23:49.245466] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:06.716 [2024-10-25 15:23:49.245502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.245513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.716 [2024-10-25 15:23:49.245525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:19:06.716 [2024-10-25 15:23:49.245534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.247061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:06.716 [2024-10-25 15:23:49.266372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.266444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:06.716 [2024-10-25 15:23:49.266468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.342 ms 00:19:06.716 [2024-10-25 15:23:49.266478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.266604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.266619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:06.716 [2024-10-25 15:23:49.266630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:06.716 [2024-10-25 15:23:49.266640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.274251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:06.716 [2024-10-25 15:23:49.274309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.716 [2024-10-25 15:23:49.274322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.578 ms 00:19:06.716 [2024-10-25 15:23:49.274333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.274447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.274465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.716 [2024-10-25 15:23:49.274477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:06.716 [2024-10-25 15:23:49.274488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.274522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.274534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:06.716 [2024-10-25 15:23:49.274549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:06.716 [2024-10-25 15:23:49.274560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.274587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:06.716 [2024-10-25 15:23:49.279566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.279730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.716 [2024-10-25 15:23:49.279909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.995 ms 00:19:06.716 [2024-10-25 15:23:49.279949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.716 [2024-10-25 15:23:49.280069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.716 [2024-10-25 15:23:49.280212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:06.716 [2024-10-25 15:23:49.280258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:06.717 [2024-10-25 15:23:49.280340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.717 [2024-10-25 15:23:49.280397] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:06.717 [2024-10-25 15:23:49.280442] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:06.717 [2024-10-25 15:23:49.280579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:06.717 [2024-10-25 15:23:49.280641] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:06.717 [2024-10-25 15:23:49.280768] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:06.717 [2024-10-25 15:23:49.280840] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:06.717 [2024-10-25 15:23:49.280853] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:06.717 [2024-10-25 15:23:49.280868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:06.717 [2024-10-25 15:23:49.280880] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:06.717 [2024-10-25 15:23:49.280896] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:06.717 [2024-10-25 15:23:49.280906] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:06.717 [2024-10-25 15:23:49.280916] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:06.717 [2024-10-25 15:23:49.280925] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:06.717 [2024-10-25 15:23:49.280936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.717 [2024-10-25 15:23:49.280946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:06.717 [2024-10-25 15:23:49.280957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:19:06.717 [2024-10-25 15:23:49.280967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.717 [2024-10-25 15:23:49.281051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.717 [2024-10-25 15:23:49.281063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:06.717 [2024-10-25 15:23:49.281073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:06.717 [2024-10-25 15:23:49.281086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.717 [2024-10-25 15:23:49.281175] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:06.717 [2024-10-25 15:23:49.281207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:06.717 [2024-10-25 15:23:49.281218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:06.717 [2024-10-25 15:23:49.281248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:06.717 [2024-10-25 15:23:49.281276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.717 [2024-10-25 15:23:49.281294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:06.717 [2024-10-25 15:23:49.281303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:06.717 [2024-10-25 15:23:49.281315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.717 [2024-10-25 15:23:49.281336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:06.717 [2024-10-25 15:23:49.281346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:06.717 [2024-10-25 15:23:49.281354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:06.717 [2024-10-25 15:23:49.281373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281382] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:06.717 [2024-10-25 15:23:49.281402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:06.717 [2024-10-25 15:23:49.281429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:06.717 [2024-10-25 15:23:49.281457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:06.717 [2024-10-25 15:23:49.281484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:06.717 [2024-10-25 15:23:49.281511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.717 [2024-10-25 15:23:49.281529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:06.717 [2024-10-25 15:23:49.281538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:06.717 [2024-10-25 15:23:49.281547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.717 [2024-10-25 15:23:49.281556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:06.717 [2024-10-25 15:23:49.281565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:06.717 [2024-10-25 15:23:49.281574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:06.717 [2024-10-25 15:23:49.281592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:06.717 [2024-10-25 15:23:49.281600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281609] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:06.717 [2024-10-25 15:23:49.281623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:06.717 [2024-10-25 15:23:49.281632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.717 [2024-10-25 15:23:49.281656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:06.717 [2024-10-25 15:23:49.281665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:06.717 [2024-10-25 15:23:49.281675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:06.717 
[2024-10-25 15:23:49.281684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:06.717 [2024-10-25 15:23:49.281693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:06.717 [2024-10-25 15:23:49.281702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:06.717 [2024-10-25 15:23:49.281713] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:06.717 [2024-10-25 15:23:49.281725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.717 [2024-10-25 15:23:49.281736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:06.717 [2024-10-25 15:23:49.281746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:06.717 [2024-10-25 15:23:49.281757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:06.717 [2024-10-25 15:23:49.281767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:06.717 [2024-10-25 15:23:49.281777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:06.718 [2024-10-25 15:23:49.281787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:06.718 [2024-10-25 15:23:49.281797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:06.718 [2024-10-25 15:23:49.281807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:06.718 [2024-10-25 15:23:49.281817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:06.718 [2024-10-25 15:23:49.281827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:06.718 [2024-10-25 15:23:49.281878] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:06.718 [2024-10-25 15:23:49.281889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:06.718 [2024-10-25 15:23:49.281910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:06.718 [2024-10-25 15:23:49.281920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:06.718 [2024-10-25 15:23:49.281931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:06.718 [2024-10-25 15:23:49.281942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.281954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:06.718 [2024-10-25 15:23:49.281965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:19:06.718 [2024-10-25 15:23:49.281978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.322427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.322484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.718 [2024-10-25 15:23:49.322500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.459 ms 00:19:06.718 [2024-10-25 15:23:49.322511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.322669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.322683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:06.718 [2024-10-25 15:23:49.322700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:06.718 [2024-10-25 15:23:49.322710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.382094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.382322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.718 [2024-10-25 15:23:49.382348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.454 ms 00:19:06.718 [2024-10-25 15:23:49.382359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.382498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.382512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.718 [2024-10-25 15:23:49.382523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:06.718 [2024-10-25 15:23:49.382533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.382987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.383001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.718 [2024-10-25 15:23:49.383012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:19:06.718 [2024-10-25 15:23:49.383022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.383144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.383157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.718 [2024-10-25 15:23:49.383168] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:06.718 [2024-10-25 15:23:49.383189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.402945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.402990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.718 [2024-10-25 15:23:49.403006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.763 ms 00:19:06.718 [2024-10-25 15:23:49.403017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.718 [2024-10-25 15:23:49.421831] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:06.718 [2024-10-25 15:23:49.421986] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:06.718 [2024-10-25 15:23:49.422007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.718 [2024-10-25 15:23:49.422018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:06.718 [2024-10-25 15:23:49.422031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.875 ms 00:19:06.718 [2024-10-25 15:23:49.422042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.451413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.451460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:06.977 [2024-10-25 15:23:49.451487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.330 ms 00:19:06.977 [2024-10-25 15:23:49.451498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.469730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.469788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:06.977 [2024-10-25 15:23:49.469803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.172 ms 00:19:06.977 [2024-10-25 15:23:49.469813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.488039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.488082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:06.977 [2024-10-25 15:23:49.488095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.175 ms 00:19:06.977 [2024-10-25 15:23:49.488105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.488928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.488964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:06.977 [2024-10-25 15:23:49.488977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:19:06.977 [2024-10-25 15:23:49.488987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.575328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.575401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:06.977 [2024-10-25 15:23:49.575420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.451 ms 00:19:06.977 [2024-10-25 15:23:49.575431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.586481] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:06.977 [2024-10-25 15:23:49.603050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.603094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.977 [2024-10-25 15:23:49.603110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.547 ms 00:19:06.977 [2024-10-25 15:23:49.603122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.603269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.603291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:06.977 [2024-10-25 15:23:49.603307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:06.977 [2024-10-25 15:23:49.603318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.603375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.603387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:06.977 [2024-10-25 15:23:49.603398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:06.977 [2024-10-25 15:23:49.603408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.603430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.603446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:06.977 [2024-10-25 15:23:49.603457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.977 [2024-10-25 15:23:49.603470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.603506] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:06.977 [2024-10-25 15:23:49.603518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.603528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:06.977 [2024-10-25 15:23:49.603539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:06.977 [2024-10-25 15:23:49.603549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.639487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.639529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:06.977 [2024-10-25 15:23:49.639552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.974 ms 00:19:06.977 [2024-10-25 15:23:49.639562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.977 [2024-10-25 15:23:49.639684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.977 [2024-10-25 15:23:49.639698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:06.977 [2024-10-25 15:23:49.639709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:06.977 [2024-10-25 15:23:49.639719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:06.977 [2024-10-25 15:23:49.640644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.977 [2024-10-25 15:23:49.645082] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 400.118 ms, result 0 00:19:06.977 [2024-10-25 15:23:49.645892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.978 [2024-10-25 15:23:49.664121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.356  [2024-10-25T15:23:52.021Z] Copying: 26/256 [MB] (26 MBps) [2024-10-25T15:23:52.958Z] Copying: 53/256 [MB] (26 MBps) [2024-10-25T15:23:53.896Z] Copying: 79/256 [MB] (26 MBps) [2024-10-25T15:23:54.833Z] Copying: 105/256 [MB] (26 MBps) [2024-10-25T15:23:55.771Z] Copying: 132/256 [MB] (26 MBps) [2024-10-25T15:23:56.707Z] Copying: 160/256 [MB] (28 MBps) [2024-10-25T15:23:58.087Z] Copying: 186/256 [MB] (26 MBps) [2024-10-25T15:23:58.682Z] Copying: 212/256 [MB] (25 MBps) [2024-10-25T15:23:59.628Z] Copying: 238/256 [MB] (25 MBps) [2024-10-25T15:23:59.629Z] Copying: 256/256 [MB] (average 26 MBps)[2024-10-25 15:23:59.368924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.901 [2024-10-25 15:23:59.383450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.383619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.901 [2024-10-25 15:23:59.383644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:16.901 [2024-10-25 15:23:59.383656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.383689] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:16.901 [2024-10-25 15:23:59.387947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.387979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.901 [2024-10-25 15:23:59.387998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:19:16.901 [2024-10-25 15:23:59.388008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.389900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.389944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.901 [2024-10-25 15:23:59.389958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:19:16.901 [2024-10-25 15:23:59.389968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.396665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.396703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:16.901 [2024-10-25 15:23:59.396716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.689 ms 00:19:16.901 [2024-10-25 15:23:59.396732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.402301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.402351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:16.901 
[2024-10-25 15:23:59.402364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.542 ms 00:19:16.901 [2024-10-25 15:23:59.402374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.438824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.438864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:16.901 [2024-10-25 15:23:59.438878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.459 ms 00:19:16.901 [2024-10-25 15:23:59.438888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.459968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.460009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:16.901 [2024-10-25 15:23:59.460023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.053 ms 00:19:16.901 [2024-10-25 15:23:59.460040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.460172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.460200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:16.901 [2024-10-25 15:23:59.460211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:16.901 [2024-10-25 15:23:59.460222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.496632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.496673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:16.901 [2024-10-25 15:23:59.496687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.450 ms 00:19:16.901 [2024-10-25 15:23:59.496697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.533576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.533629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:16.901 [2024-10-25 15:23:59.533645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.856 ms 00:19:16.901 [2024-10-25 15:23:59.533655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.569844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.569888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:16.901 [2024-10-25 15:23:59.569903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.179 ms 00:19:16.901 [2024-10-25 15:23:59.569913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.605494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.901 [2024-10-25 15:23:59.605535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:16.901 [2024-10-25 15:23:59.605549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.548 ms 00:19:16.901 [2024-10-25 15:23:59.605560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.901 [2024-10-25 15:23:59.605643] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:16.901 [2024-10-25 15:23:59.605661] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 
15:23:59.605927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.605995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:19:16.901 [2024-10-25 15:23:59.606207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:16.901 [2024-10-25 15:23:59.606218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:16.902 [2024-10-25 15:23:59.606757] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:16.902 [2024-10-25 15:23:59.606771] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc 00:19:16.902 [2024-10-25 15:23:59.606782] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:16.902 [2024-10-25 15:23:59.606792] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:16.902 [2024-10-25 15:23:59.606801] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:16.902 [2024-10-25 15:23:59.606811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:16.902 [2024-10-25 15:23:59.606820] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:16.902 [2024-10-25 15:23:59.606837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:16.902 [2024-10-25 15:23:59.606847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:16.902 [2024-10-25 15:23:59.606856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:16.902 [2024-10-25 15:23:59.606865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:16.902 [2024-10-25 15:23:59.606875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.902 [2024-10-25 15:23:59.606885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:16.902 [2024-10-25 15:23:59.606896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:19:16.902 [2024-10-25 15:23:59.606913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.627305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.162 [2024-10-25 15:23:59.627342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.162 [2024-10-25 15:23:59.627356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.399 ms 00:19:17.162 [2024-10-25 15:23:59.627366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.627986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.162 [2024-10-25 15:23:59.627998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.162 [2024-10-25 15:23:59.628015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:17.162 [2024-10-25 15:23:59.628025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.682998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.162 [2024-10-25 15:23:59.683043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.162 [2024-10-25 15:23:59.683057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.162 [2024-10-25 15:23:59.683067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.683153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.162 [2024-10-25 15:23:59.683166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.162 [2024-10-25 15:23:59.683196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.162 [2024-10-25 15:23:59.683207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.683257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.162 [2024-10-25 15:23:59.683270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.162 [2024-10-25 15:23:59.683281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.162 [2024-10-25 15:23:59.683291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.683325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.162 [2024-10-25 15:23:59.683336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.162 [2024-10-25 15:23:59.683346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.162 [2024-10-25 15:23:59.683360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.162 [2024-10-25 15:23:59.809700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.162 [2024-10-25 15:23:59.809766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.162 [2024-10-25 15:23:59.809782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.162 [2024-10-25 15:23:59.809792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.911585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.911792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.422 [2024-10-25 15:23:59.911817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.911834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.911927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.911939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.422 [2024-10-25 15:23:59.911950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.911960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.911990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.912001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.422 [2024-10-25 15:23:59.912011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.912021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.912140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.912153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.422 [2024-10-25 15:23:59.912164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.912174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.912234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.912247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.422 [2024-10-25 15:23:59.912257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 
[2024-10-25 15:23:59.912268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.912312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.912323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.422 [2024-10-25 15:23:59.912333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.912342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.912386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.422 [2024-10-25 15:23:59.912399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.422 [2024-10-25 15:23:59.912409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.422 [2024-10-25 15:23:59.912419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.422 [2024-10-25 15:23:59.912562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.955 ms, result 0 00:19:18.801 00:19:18.801 00:19:18.801 15:24:01 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75697 00:19:18.801 15:24:01 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:18.801 15:24:01 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75697 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75697 ']' 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.801 15:24:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:18.801 [2024-10-25 15:24:01.257295] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:19:18.801 [2024-10-25 15:24:01.257423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75697 ] 00:19:18.801 [2024-10-25 15:24:01.434936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.060 [2024-10-25 15:24:01.551414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.005 15:24:02 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:20.005 15:24:02 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:20.005 15:24:02 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:20.005 [2024-10-25 15:24:02.629315] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.005 [2024-10-25 15:24:02.629523] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:20.274 [2024-10-25 15:24:02.811992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-10-25 15:24:02.812210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:20.274 [2024-10-25 15:24:02.812247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.274 [2024-10-25 15:24:02.812259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-10-25 15:24:02.816084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-10-25 15:24:02.816239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:20.274 [2024-10-25 15:24:02.816266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:19:20.274 [2024-10-25 15:24:02.816277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-10-25 15:24:02.816457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:20.274 [2024-10-25 15:24:02.817466] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:20.274 [2024-10-25 15:24:02.817497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.274 [2024-10-25 15:24:02.817508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:20.274 [2024-10-25 15:24:02.817522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.053 ms 00:19:20.274 [2024-10-25 15:24:02.817532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.274 [2024-10-25 15:24:02.818993] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:20.274 [2024-10-25 15:24:02.838296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.838459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:20.275 [2024-10-25 15:24:02.838481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.342 ms 00:19:20.275 [2024-10-25 15:24:02.838498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.838596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.838615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:20.275 [2024-10-25 15:24:02.838627] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:20.275 [2024-10-25 15:24:02.838642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.845344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.845390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:20.275 [2024-10-25 15:24:02.845403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.661 ms 00:19:20.275 [2024-10-25 15:24:02.845418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.845555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.845575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:20.275 [2024-10-25 15:24:02.845587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:20.275 [2024-10-25 15:24:02.845602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.845629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.845652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:20.275 [2024-10-25 15:24:02.845663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:20.275 [2024-10-25 15:24:02.845678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.845703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:20.275 [2024-10-25 15:24:02.850365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.850397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:20.275 [2024-10-25 15:24:02.850415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms 00:19:20.275 [2024-10-25 15:24:02.850425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.850501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.850513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:20.275 [2024-10-25 15:24:02.850529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.275 [2024-10-25 15:24:02.850539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.850566] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:20.275 [2024-10-25 15:24:02.850599] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:20.275 [2024-10-25 15:24:02.850649] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:20.275 [2024-10-25 15:24:02.850669] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:20.275 [2024-10-25 15:24:02.850764] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:20.275 [2024-10-25 15:24:02.850777] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:20.275 [2024-10-25 15:24:02.850795] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:20.275 [2024-10-25 15:24:02.850809] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:20.275 [2024-10-25 15:24:02.850832] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:20.275 [2024-10-25 15:24:02.850844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:20.275 [2024-10-25 15:24:02.850858] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:20.275 [2024-10-25 15:24:02.850869] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:20.275 [2024-10-25 15:24:02.850888] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:20.275 [2024-10-25 15:24:02.850906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.850921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:20.275 [2024-10-25 15:24:02.850932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.348 ms 00:19:20.275 [2024-10-25 15:24:02.850947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.851024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.275 [2024-10-25 15:24:02.851041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:20.275 [2024-10-25 15:24:02.851056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:20.275 [2024-10-25 15:24:02.851071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.275 [2024-10-25 15:24:02.851159] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:20.275 [2024-10-25 15:24:02.851197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:20.275 [2024-10-25 15:24:02.851209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:20.275 [2024-10-25 15:24:02.851251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:20.275 [2024-10-25 15:24:02.851292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.275 [2024-10-25 15:24:02.851316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:20.275 [2024-10-25 15:24:02.851331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:20.275 [2024-10-25 15:24:02.851340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:20.275 [2024-10-25 15:24:02.851355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:20.275 [2024-10-25 15:24:02.851365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:20.275 [2024-10-25 15:24:02.851380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 
[2024-10-25 15:24:02.851389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:20.275 [2024-10-25 15:24:02.851403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:20.275 [2024-10-25 15:24:02.851448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:20.275 [2024-10-25 15:24:02.851491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:20.275 [2024-10-25 15:24:02.851524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:20.275 [2024-10-25 15:24:02.851563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:20.275 [2024-10-25 15:24:02.851597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.275 [2024-10-25 15:24:02.851620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:20.275 [2024-10-25 15:24:02.851635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:20.275 [2024-10-25 15:24:02.851645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:20.275 [2024-10-25 15:24:02.851659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:20.275 [2024-10-25 15:24:02.851668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:20.275 [2024-10-25 15:24:02.851686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:20.275 [2024-10-25 15:24:02.851710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:20.275 [2024-10-25 15:24:02.851720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851736] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:20.275 [2024-10-25 15:24:02.851746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:20.275 [2024-10-25 15:24:02.851761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:20.275 [2024-10-25 15:24:02.851792] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:20.275 [2024-10-25 15:24:02.851802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:20.275 [2024-10-25 15:24:02.851816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:20.275 [2024-10-25 15:24:02.851826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:20.275 [2024-10-25 15:24:02.851840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:20.275 [2024-10-25 15:24:02.851850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:20.275 [2024-10-25 15:24:02.851866] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:20.275 [2024-10-25 15:24:02.851879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.275 [2024-10-25 15:24:02.851901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:20.275 [2024-10-25 15:24:02.851912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:20.275 [2024-10-25 15:24:02.851928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:20.276 [2024-10-25 15:24:02.851939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:20.276 [2024-10-25 15:24:02.851954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:20.276 [2024-10-25 15:24:02.851965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:20.276 [2024-10-25 15:24:02.851980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:20.276 [2024-10-25 15:24:02.851990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:20.276 [2024-10-25 15:24:02.852005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:20.276 [2024-10-25 15:24:02.852016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:20.276 [2024-10-25 15:24:02.852082] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:20.276 [2024-10-25 
15:24:02.852094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:20.276 [2024-10-25 15:24:02.852125] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:20.276 [2024-10-25 15:24:02.852141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:20.276 [2024-10-25 15:24:02.852152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:20.276 [2024-10-25 15:24:02.852170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.852192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:20.276 [2024-10-25 15:24:02.852207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:19:20.276 [2024-10-25 15:24:02.852217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.894188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.894349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:20.276 [2024-10-25 15:24:02.894496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.964 ms 00:19:20.276 [2024-10-25 15:24:02.894538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.894744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.894794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:20.276 [2024-10-25 15:24:02.894892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:20.276 [2024-10-25 15:24:02.894941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.942097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.942276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:20.276 [2024-10-25 15:24:02.942369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.172 ms 00:19:20.276 [2024-10-25 15:24:02.942413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.942536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.942575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:20.276 [2024-10-25 15:24:02.942670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:20.276 [2024-10-25 15:24:02.942708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.943191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.943234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:20.276 [2024-10-25 15:24:02.943427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:20.276 [2024-10-25 15:24:02.943473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.943623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.943684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:20.276 [2024-10-25 15:24:02.943752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:20.276 [2024-10-25 15:24:02.943784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.965619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.965769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:20.276 [2024-10-25 15:24:02.965949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.818 ms 00:19:20.276 [2024-10-25 15:24:02.965989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.276 [2024-10-25 15:24:02.985170] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:20.276 [2024-10-25 15:24:02.985350] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:20.276 [2024-10-25 15:24:02.985492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.276 [2024-10-25 15:24:02.985526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:20.276 [2024-10-25 15:24:02.985559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.396 ms 00:19:20.276 [2024-10-25 15:24:02.985588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.535 [2024-10-25 15:24:03.014377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.535 [2024-10-25 15:24:03.014515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:20.535 [2024-10-25 15:24:03.014591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.734 ms 00:19:20.535 [2024-10-25 15:24:03.014627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.535 [2024-10-25 15:24:03.032809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.535 [2024-10-25 15:24:03.032956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:20.535 [2024-10-25 15:24:03.033098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.111 ms 00:19:20.535 [2024-10-25 15:24:03.033134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.535 [2024-10-25 15:24:03.051515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.535 [2024-10-25 15:24:03.051647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:20.535 [2024-10-25 15:24:03.051722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.306 ms 00:19:20.535 [2024-10-25 15:24:03.051757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.535 [2024-10-25 15:24:03.052613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.535 [2024-10-25 15:24:03.052735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:20.535 [2024-10-25 15:24:03.052819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:19:20.535 [2024-10-25 15:24:03.052857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.535 [2024-10-25 
00:19:20.535 [2024-10-25 15:24:03.148748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:20.535 [2024-10-25 15:24:03.148784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.760 ms
00:19:20.535 [2024-10-25 15:24:03.148796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.159993] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:20.535 [2024-10-25 15:24:03.176337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.176405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:20.535 [2024-10-25 15:24:03.176423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.468 ms
00:19:20.535 [2024-10-25 15:24:03.176439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.176578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.176597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:20.535 [2024-10-25 15:24:03.176609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:19:20.535 [2024-10-25 15:24:03.176624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.176675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.176692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:20.535 [2024-10-25 15:24:03.176703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:19:20.535 [2024-10-25 15:24:03.176718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.176749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.176765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:20.535 [2024-10-25 15:24:03.176776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:19:20.535 [2024-10-25 15:24:03.176793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.176833] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:20.535 [2024-10-25 15:24:03.176855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.176866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:20.535 [2024-10-25 15:24:03.176881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:19:20.535 [2024-10-25 15:24:03.176897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.214950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.215138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:20.535 [2024-10-25 15:24:03.215171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.079 ms
00:19:20.535 [2024-10-25 15:24:03.215198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.215327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.535 [2024-10-25 15:24:03.215341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:20.535 [2024-10-25 15:24:03.215357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms
00:19:20.535 [2024-10-25 15:24:03.215368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.535 [2024-10-25 15:24:03.216338] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:20.535 [2024-10-25 15:24:03.221020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.658 ms, result 0
00:19:20.535 [2024-10-25 15:24:03.222387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:20.535 Some configs were skipped because the RPC state that can call them passed over.
00:19:20.794 15:24:03 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:19:20.794 [2024-10-25 15:24:03.474246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:20.794 [2024-10-25 15:24:03.474485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:20.794 [2024-10-25 15:24:03.474573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms
00:19:20.794 [2024-10-25 15:24:03.474620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:20.794 [2024-10-25 15:24:03.474701] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.192 ms, result 0
00:19:20.794 true
00:19:20.794 15:24:03 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:19:21.052 [2024-10-25 15:24:03.685490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:21.052 [2024-10-25 15:24:03.685680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:21.052 [2024-10-25 15:24:03.685716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms
00:19:21.052 [2024-10-25 15:24:03.685729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:21.052 [2024-10-25 15:24:03.685795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.394 ms, result 0
00:19:21.052 true
00:19:21.052 15:24:03 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75697
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75697 ']'
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75697
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75697
00:19:21.052 killing process with pid 75697
15:24:03 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75697'
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75697
00:19:21.052 15:24:03 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75697
00:19:22.432 [2024-10-25 15:24:04.871778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.432 [2024-10-25 15:24:04.871832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:22.432 [2024-10-25 15:24:04.871848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:22.432 [2024-10-25 15:24:04.871861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.432 [2024-10-25 15:24:04.871884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:22.432 [2024-10-25 15:24:04.876199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.432 [2024-10-25 15:24:04.876225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:22.432 [2024-10-25 15:24:04.876245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms
00:19:22.432 [2024-10-25 15:24:04.876255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.432 [2024-10-25 15:24:04.876509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.432 [2024-10-25 15:24:04.876522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:22.433 [2024-10-25 15:24:04.876534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms
00:19:22.433 [2024-10-25 15:24:04.876545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.879802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.879835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:22.433 [2024-10-25 15:24:04.879850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms
00:19:22.433 [2024-10-25 15:24:04.879863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.885572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.885704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:22.433 [2024-10-25 15:24:04.885793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.678 ms
00:19:22.433 [2024-10-25 15:24:04.885834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.900795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.900931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:22.433 [2024-10-25 15:24:04.901016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.903 ms
00:19:22.433 [2024-10-25 15:24:04.901065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.911681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.911817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:22.433 [2024-10-25 15:24:04.911906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.538 ms
00:19:22.433 [2024-10-25 15:24:04.911951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.912120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.912167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:22.433 [2024-10-25 15:24:04.912224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:19:22.433 [2024-10-25 15:24:04.912313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.927332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.927471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:22.433 [2024-10-25 15:24:04.927555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.983 ms
00:19:22.433 [2024-10-25 15:24:04.927594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.942295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.942425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:22.433 [2024-10-25 15:24:04.942501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.651 ms
00:19:22.433 [2024-10-25 15:24:04.942536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.957039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.957168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:22.433 [2024-10-25 15:24:04.957257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.451 ms
00:19:22.433 [2024-10-25 15:24:04.957293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.971577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.433 [2024-10-25 15:24:04.971761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:22.433 [2024-10-25 15:24:04.971873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.184 ms
00:19:22.433 [2024-10-25 15:24:04.971931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.433 [2024-10-25 15:24:04.972138] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:22.433 [2024-10-25 15:24:04.972240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.972895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.973998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:22.433 [2024-10-25 15:24:04.974103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:22.434 [2024-10-25 15:24:04.974346] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:22.434 [2024-10-25 15:24:04.974361] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc
00:19:22.434 [2024-10-25 15:24:04.974382] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:22.434 [2024-10-25 15:24:04.974398] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:22.434 [2024-10-25 15:24:04.974410] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:22.434 [2024-10-25 15:24:04.974423] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:22.434 [2024-10-25 15:24:04.974433] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:22.434 [2024-10-25 15:24:04.974445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:22.434 [2024-10-25 15:24:04.974456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:22.434 [2024-10-25 15:24:04.974467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:22.434 [2024-10-25 15:24:04.974476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:22.434 [2024-10-25 15:24:04.974489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.434 [2024-10-25 15:24:04.974501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:22.434 [2024-10-25 15:24:04.974515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms
00:19:22.434 [2024-10-25 15:24:04.974525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:04.994561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.434 [2024-10-25 15:24:04.994698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:22.434 [2024-10-25 15:24:04.994726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.037 ms
00:19:22.434 [2024-10-25 15:24:04.994737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:04.995393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:22.434 [2024-10-25 15:24:04.995413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:22.434 [2024-10-25 15:24:04.995427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms
00:19:22.434 [2024-10-25 15:24:04.995437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:05.064118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.434 [2024-10-25 15:24:05.064200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:22.434 [2024-10-25 15:24:05.064218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.434 [2024-10-25 15:24:05.064230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:05.064342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.434 [2024-10-25 15:24:05.064355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:22.434 [2024-10-25 15:24:05.064369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.434 [2024-10-25 15:24:05.064379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:05.064437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.434 [2024-10-25 15:24:05.064450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:22.434 [2024-10-25 15:24:05.064466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.434 [2024-10-25 15:24:05.064476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.434 [2024-10-25 15:24:05.064498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.434 [2024-10-25 15:24:05.064508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:22.434 [2024-10-25 15:24:05.064521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.434 [2024-10-25 15:24:05.064531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.694 [2024-10-25 15:24:05.189238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.694 [2024-10-25 15:24:05.189314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:22.694 [2024-10-25 15:24:05.189349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.694 [2024-10-25 15:24:05.189359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.694 [2024-10-25 15:24:05.290267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.694 [2024-10-25 15:24:05.290330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:22.694 [2024-10-25 15:24:05.290349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.694 [2024-10-25 15:24:05.290359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.694 [2024-10-25 15:24:05.290470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.694 [2024-10-25 15:24:05.290486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:22.694 [2024-10-25 15:24:05.290503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.694 [2024-10-25 15:24:05.290513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.694 [2024-10-25 15:24:05.290545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.695 [2024-10-25 15:24:05.290556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:22.695 [2024-10-25 15:24:05.290568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.695 [2024-10-25 15:24:05.290578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.695 [2024-10-25 15:24:05.290691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.695 [2024-10-25 15:24:05.290704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:22.695 [2024-10-25 15:24:05.290720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.695 [2024-10-25 15:24:05.290731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.695 [2024-10-25 15:24:05.290772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.695 [2024-10-25 15:24:05.290784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:22.695 [2024-10-25 15:24:05.290797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.695 [2024-10-25 15:24:05.290807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.695 [2024-10-25 15:24:05.290849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.695 [2024-10-25 15:24:05.290860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:22.695 [2024-10-25 15:24:05.290878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.695 [2024-10-25 15:24:05.290888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.695 [2024-10-25 15:24:05.290947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:22.695 [2024-10-25 15:24:05.290959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:22.695 [2024-10-25 15:24:05.290972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:22.695 [2024-10-25 15:24:05.290982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:22.695 [2024-10-25 15:24:05.291117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 419.995 ms, result 0
00:19:23.632 15:24:06 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:19:23.632 15:24:06 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:23.891 [2024-10-25 15:24:06.394816] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... [2024-10-25 15:24:06.394965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75769 ]
00:19:24.150 [2024-10-25 15:24:06.576991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:24.150 [2024-10-25 15:24:06.691005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:24.410 [2024-10-25 15:24:07.049995] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:24.410 [2024-10-25 15:24:07.050058] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:24.671 [2024-10-25 15:24:07.211542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.211594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:24.671 [2024-10-25 15:24:07.211610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:24.671 [2024-10-25 15:24:07.211621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.214783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.214823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:24.671 [2024-10-25 15:24:07.214836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms
00:19:24.671 [2024-10-25 15:24:07.214847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.214945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:24.671 [2024-10-25 15:24:07.216015] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:24.671 [2024-10-25 15:24:07.216048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.216059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:24.671 [2024-10-25 15:24:07.216071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms
00:19:24.671 [2024-10-25 15:24:07.216081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.217553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:24.671 [2024-10-25 15:24:07.236472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.236512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:24.671 [2024-10-25 15:24:07.236532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.951 ms
00:19:24.671 [2024-10-25 15:24:07.236542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.236647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.236661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:24.671 [2024-10-25 15:24:07.236673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:19:24.671 [2024-10-25 15:24:07.236683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.243392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.243547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:24.671 [2024-10-25 15:24:07.243568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.680 ms
00:19:24.671 [2024-10-25 15:24:07.243578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.243683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.243698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:24.671 [2024-10-25 15:24:07.243708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms
00:19:24.671 [2024-10-25 15:24:07.243718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.243748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.243759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:24.671 [2024-10-25 15:24:07.243773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:24.671 [2024-10-25 15:24:07.243783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.243806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:24.671 [2024-10-25 15:24:07.248616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.248650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:24.671 [2024-10-25 15:24:07.248661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms
00:19:24.671 [2024-10-25 15:24:07.248687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.248755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.248768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:19:24.671 [2024-10-25 15:24:07.248779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:19:24.671 [2024-10-25 15:24:07.248789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.248808] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:24.671 [2024-10-25 15:24:07.248830] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:19:24.671 [2024-10-25 15:24:07.248867] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:24.671 [2024-10-25 15:24:07.248885] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:19:24.671 [2024-10-25 15:24:07.248974] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:24.671 [2024-10-25 15:24:07.248986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:24.671 [2024-10-25 15:24:07.248999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:24.671 [2024-10-25 15:24:07.249011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:19:24.671 [2024-10-25 15:24:07.249024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:19:24.671 [2024-10-25 15:24:07.249037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:19:24.671 [2024-10-25 15:24:07.249047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:19:24.671 [2024-10-25 15:24:07.249057] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:19:24.671 [2024-10-25 15:24:07.249067] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:19:24.671 [2024-10-25 15:24:07.249077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.249087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:19:24.671 [2024-10-25 15:24:07.249097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms
00:19:24.671 [2024-10-25 15:24:07.249108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.249184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.671 [2024-10-25 15:24:07.249216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:19:24.671 [2024-10-25 15:24:07.249227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:19:24.671 [2024-10-25 15:24:07.249240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.671 [2024-10-25 15:24:07.249329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:24.671 [2024-10-25 15:24:07.249341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:24.671 [2024-10-25 15:24:07.249352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:24.671 [2024-10-25 15:24:07.249362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.671 [2024-10-25 15:24:07.249373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:24.671 [2024-10-25 15:24:07.249382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:19:24.671 [2024-10-25 15:24:07.249392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:19:24.671 [2024-10-25 15:24:07.249402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:24.671 [2024-10-25 15:24:07.249411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:19:24.671 [2024-10-25 15:24:07.249420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:24.671 [2024-10-25 15:24:07.249430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:24.671 [2024-10-25 15:24:07.249439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:19:24.671 [2024-10-25 15:24:07.249448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:24.671 [2024-10-25 15:24:07.249468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:24.671 [2024-10-25 15:24:07.249478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:19:24.671 [2024-10-25 15:24:07.249487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:24.672 [2024-10-25 15:24:07.249506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:24.672 [2024-10-25 15:24:07.249535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:24.672 [2024-10-25 15:24:07.249562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:24.672 [2024-10-25 15:24:07.249589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:24.672 [2024-10-25 15:24:07.249616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:24.672 [2024-10-25 15:24:07.249642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:24.672 [2024-10-25 15:24:07.249660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:24.672 [2024-10-25 15:24:07.249669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:19:24.672 [2024-10-25 15:24:07.249678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:24.672 [2024-10-25 15:24:07.249687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:24.672 [2024-10-25 15:24:07.249695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:19:24.672 [2024-10-25 15:24:07.249704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:24.672 [2024-10-25 15:24:07.249722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:19:24.672 [2024-10-25 15:24:07.249732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:24.672 [2024-10-25 15:24:07.249750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:24.672 [2024-10-25 15:24:07.249760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:24.672 [2024-10-25 15:24:07.249783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:24.672 [2024-10-25 15:24:07.249793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:19:24.672 [2024-10-25 15:24:07.249802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:19:24.672 [2024-10-25 15:24:07.249813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:24.672 [2024-10-25 15:24:07.249822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:19:24.672 [2024-10-25 15:24:07.249832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:19:24.672 [2024-10-25 15:24:07.249842] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:24.672 [2024-10-25 15:24:07.249854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.249865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:24.672 [2024-10-25 15:24:07.249875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:24.672 [2024-10-25 15:24:07.249886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:24.672 [2024-10-25 15:24:07.249896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:24.672 [2024-10-25 15:24:07.249907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:24.672 [2024-10-25 15:24:07.249917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:24.672 [2024-10-25 15:24:07.249927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:24.672 [2024-10-25 15:24:07.249938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:24.672 [2024-10-25 15:24:07.249948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:24.672 [2024-10-25 15:24:07.249958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.249968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.249978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.249988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.249999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:24.672 [2024-10-25 15:24:07.250009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:24.672 [2024-10-25 15:24:07.250020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.250031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:24.672 [2024-10-25 15:24:07.250042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:24.672 [2024-10-25 15:24:07.250052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:24.672 [2024-10-25 15:24:07.250062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:24.672 [2024-10-25 15:24:07.250072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.672 [2024-10-25 15:24:07.250082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:24.672 [2024-10-25 15:24:07.250093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms
00:19:24.672 [2024-10-25 15:24:07.250106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.672 [2024-10-25 15:24:07.288949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.672 [2024-10-25 15:24:07.289126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:24.672 [2024-10-25 15:24:07.289229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.851 ms
00:19:24.672 [2024-10-25 15:24:07.289269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.672 [2024-10-25 15:24:07.289428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.672 [2024-10-25 15:24:07.289574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:24.672 [2024-10-25 15:24:07.289670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms
00:19:24.672 [2024-10-25 15:24:07.289700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.346179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.346372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:24.673 [2024-10-25 15:24:07.346519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.522 ms
00:19:24.673 [2024-10-25 15:24:07.346558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.346703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.346807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:24.673 [2024-10-25 15:24:07.346885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:24.673 [2024-10-25 15:24:07.346925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.347393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.347498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:24.673 [2024-10-25 15:24:07.347593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms
00:19:24.673 [2024-10-25 15:24:07.347630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.347781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.347818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:24.673 [2024-10-25 15:24:07.347914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms
00:19:24.673 [2024-10-25 15:24:07.347950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.367067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.367228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:24.673 [2024-10-25 15:24:07.367304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.098 ms
00:19:24.673 [2024-10-25 15:24:07.367341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.673 [2024-10-25 15:24:07.386572] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:19:24.673 [2024-10-25 15:24:07.386749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:24.673 [2024-10-25 15:24:07.386848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.673 [2024-10-25 15:24:07.386882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:19:24.673 [2024-10-25 15:24:07.386925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.399 ms
00:19:24.673 [2024-10-25 15:24:07.386955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.932 [2024-10-25 15:24:07.416503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.932 [2024-10-25 15:24:07.416671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:19:24.932 [2024-10-25 15:24:07.416808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.469 ms
00:19:24.932 [2024-10-25 15:24:07.416845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.932 [2024-10-25 15:24:07.435255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.932 [2024-10-25 15:24:07.435390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:19:24.932 [2024-10-25 15:24:07.435470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.339 ms
00:19:24.932 [2024-10-25 15:24:07.435506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.932 [2024-10-25 15:24:07.453616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.932 [2024-10-25 15:24:07.453762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:19:24.932 [2024-10-25 15:24:07.453895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.997 ms
00:19:24.933 [2024-10-25 15:24:07.453912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.454735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.454759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:24.933 [2024-10-25 15:24:07.454772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms
00:19:24.933 [2024-10-25 15:24:07.454782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.540547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.540785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:24.933 [2024-10-25 15:24:07.540810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.875 ms
00:19:24.933 [2024-10-25 15:24:07.540821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.551694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:24.933 [2024-10-25 15:24:07.567650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.567697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:24.933 [2024-10-25 15:24:07.567712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.780 ms
00:19:24.933 [2024-10-25 15:24:07.567739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.567863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.567881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:24.933 [2024-10-25 15:24:07.567892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:19:24.933 [2024-10-25 15:24:07.567903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.567957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.567969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:24.933 [2024-10-25 15:24:07.567979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:19:24.933 [2024-10-25 15:24:07.567989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.568016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.568028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:24.933 [2024-10-25 15:24:07.568041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:24.933 [2024-10-25 15:24:07.568050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.568086] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:24.933 [2024-10-25 15:24:07.568098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.568108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:24.933 [2024-10-25 15:24:07.568128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:19:24.933 [2024-10-25 15:24:07.568137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.604643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.604688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:24.933 [2024-10-25 15:24:07.604702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.543 ms
00:19:24.933 [2024-10-25 15:24:07.604729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:24.933 [2024-10-25 15:24:07.604843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:24.933 [2024-10-25 15:24:07.604856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize
initialization 00:19:24.933 [2024-10-25 15:24:07.604867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:24.933 [2024-10-25 15:24:07.604877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.933 [2024-10-25 15:24:07.605774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.933 [2024-10-25 15:24:07.610005] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.580 ms, result 0 00:19:24.933 [2024-10-25 15:24:07.610936] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:24.933 [2024-10-25 15:24:07.629039] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:26.311  [2024-10-25T15:24:09.975Z] Copying: 29/256 [MB] (29 MBps) [2024-10-25T15:24:10.910Z] Copying: 56/256 [MB] (26 MBps) [2024-10-25T15:24:11.847Z] Copying: 82/256 [MB] (26 MBps) [2024-10-25T15:24:12.785Z] Copying: 110/256 [MB] (27 MBps) [2024-10-25T15:24:13.721Z] Copying: 139/256 [MB] (28 MBps) [2024-10-25T15:24:14.658Z] Copying: 167/256 [MB] (28 MBps) [2024-10-25T15:24:16.063Z] Copying: 196/256 [MB] (28 MBps) [2024-10-25T15:24:16.631Z] Copying: 224/256 [MB] (28 MBps) [2024-10-25T15:24:16.890Z] Copying: 252/256 [MB] (28 MBps) [2024-10-25T15:24:16.890Z] Copying: 256/256 [MB] (average 28 MBps)[2024-10-25 15:24:16.732308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.162 [2024-10-25 15:24:16.746980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.747023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.162 [2024-10-25 15:24:16.747039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:34.162 [2024-10-25 15:24:16.747050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.747073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.162 [2024-10-25 15:24:16.751234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.751275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.162 [2024-10-25 15:24:16.751287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.153 ms 00:19:34.162 [2024-10-25 15:24:16.751297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.751535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.751549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.162 [2024-10-25 15:24:16.751560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:34.162 [2024-10-25 15:24:16.751569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.754451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.754586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.162 [2024-10-25 15:24:16.754617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:19:34.162 [2024-10-25 15:24:16.754628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
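
The progress trace above copies 256 MB in roughly nine seconds (progress ticks run from about 15:24:07.6 to 15:24:16.7), which is where the reported "average 28 MBps" comes from. A minimal sketch of that arithmetic, using values read off the log — the start/end seconds are approximations taken from the first and last progress ticks, not exact figures:

/*
 * throughput.c — back-of-the-envelope check of the "Copying: 256/256 [MB]
 * (average 28 MBps)" line above. Illustrative sketch only, not part of the
 * SPDK test suite; timestamps are approximate values from the progress trace.
 */
#include <stdio.h>

int main(void)
{
    double megabytes = 256.0;   /* total data copied, from the log */
    double start_s   = 7.6;     /* ~15:24:07.6, first progress tick */
    double end_s     = 16.7;    /* ~15:24:16.7, final progress tick */
    double elapsed   = end_s - start_s;

    /* 256 MB / ~9.1 s ~= 28 MBps — consistent with the reported average. */
    printf("average: %.1f MBps over %.1f s\n", megabytes / elapsed, elapsed);
    return 0;
}
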
00:19:34.162 [2024-10-25 15:24:16.760280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.760314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.162 [2024-10-25 15:24:16.760325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.620 ms 00:19:34.162 [2024-10-25 15:24:16.760336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.796718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.796862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.162 [2024-10-25 15:24:16.796883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.370 ms 00:19:34.162 [2024-10-25 15:24:16.796893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.818198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.818240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.162 [2024-10-25 15:24:16.818264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.235 ms 00:19:34.162 [2024-10-25 15:24:16.818274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.818415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.818429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.162 [2024-10-25 15:24:16.818440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:34.162 [2024-10-25 15:24:16.818450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.162 [2024-10-25 15:24:16.855939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.162 [2024-10-25 15:24:16.855995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.162 [2024-10-25 15:24:16.856010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.514 ms 00:19:34.162 [2024-10-25 15:24:16.856020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.422 [2024-10-25 15:24:16.892713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.422 [2024-10-25 15:24:16.892766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.422 [2024-10-25 15:24:16.892781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.662 ms 00:19:34.422 [2024-10-25 15:24:16.892791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.422 [2024-10-25 15:24:16.928323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.422 [2024-10-25 15:24:16.928384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.423 [2024-10-25 15:24:16.928399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.526 ms 00:19:34.423 [2024-10-25 15:24:16.928409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.423 [2024-10-25 15:24:16.964792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.423 [2024-10-25 15:24:16.964831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.423 [2024-10-25 15:24:16.964845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.351 ms 00:19:34.423 [2024-10-25 
15:24:16.964855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.423 [2024-10-25 15:24:16.964939] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.423 [2024-10-25 15:24:16.964962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.964975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.964987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.964998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965228] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 
15:24:16.965499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:19:34.423 [2024-10-25 15:24:16.965761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.423 [2024-10-25 15:24:16.965853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.965978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.424 [2024-10-25 15:24:16.966064] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.424 [2024-10-25 15:24:16.966075] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc 00:19:34.424 [2024-10-25 15:24:16.966086] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.424 [2024-10-25 15:24:16.966095] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.424 [2024-10-25 15:24:16.966105] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.424 [2024-10-25 15:24:16.966115] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.424 [2024-10-25 15:24:16.966125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.424 [2024-10-25 15:24:16.966135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.424 [2024-10-25 15:24:16.966145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.424 [2024-10-25 15:24:16.966154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.424 [2024-10-25 15:24:16.966163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.424 [2024-10-25 15:24:16.966173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.424 [2024-10-25 15:24:16.966193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.424 [2024-10-25 15:24:16.966204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:19:34.424 [2024-10-25 15:24:16.966218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:16.986438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.424 [2024-10-25 15:24:16.986582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.424 [2024-10-25 15:24:16.986602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.230 ms 00:19:34.424 [2024-10-25 15:24:16.986612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:16.987118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.424 [2024-10-25 15:24:16.987138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.424 [2024-10-25 15:24:16.987149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:19:34.424 [2024-10-25 15:24:16.987159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:17.044105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.424 [2024-10-25 15:24:17.044145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.424 [2024-10-25 15:24:17.044159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.424 [2024-10-25 15:24:17.044169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:17.044266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.424 [2024-10-25 15:24:17.044282] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.424 [2024-10-25 15:24:17.044293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.424 [2024-10-25 15:24:17.044302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:17.044350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.424 [2024-10-25 15:24:17.044363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.424 [2024-10-25 15:24:17.044374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.424 [2024-10-25 15:24:17.044385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.424 [2024-10-25 15:24:17.044403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.424 [2024-10-25 15:24:17.044414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.424 [2024-10-25 15:24:17.044427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.424 [2024-10-25 15:24:17.044437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.171690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.171753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.683 [2024-10-25 15:24:17.171768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.171778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.274762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.274823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.683 [2024-10-25 15:24:17.274844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.274855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.274953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.274965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.683 [2024-10-25 15:24:17.274976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.274986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.275025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.683 [2024-10-25 15:24:17.275035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.275045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.275205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.683 [2024-10-25 15:24:17.275216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.275226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.275279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.683 [2024-10-25 15:24:17.275289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.275299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.275354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.683 [2024-10-25 15:24:17.275364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.275374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.683 [2024-10-25 15:24:17.275431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.683 [2024-10-25 15:24:17.275440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.683 [2024-10-25 15:24:17.275450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.683 [2024-10-25 15:24:17.275590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.461 ms, result 0 00:19:35.619 00:19:35.619 00:19:35.619 15:24:18 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:35.903 15:24:18 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:36.163 15:24:18 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:36.163 [2024-10-25 15:24:18.868168] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
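
After the FTL shutdown, trim.sh compares the first 4194304 bytes (4 MiB) of the data file against /dev/zero with cmp, which appears to check that the trimmed range reads back as zeroes. A minimal C stand-in for that cmp invocation — illustrative only; the real test shells out to cmp(1), and the file argument here is whatever path the test passes:

/*
 * zerocheck.c — stand-in for `cmp --bytes=4194304 <file> /dev/zero`:
 * verify that the first 4 MiB of a file are all zero bytes.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 2; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    const unsigned long limit = 4194304UL;  /* --bytes=4194304, i.e. 4 MiB */
    unsigned long off;
    int c;
    for (off = 0; off < limit && (c = fgetc(f)) != EOF; off++) {
        if (c != 0) {
            printf("first nonzero byte at offset %lu\n", off);
            fclose(f);
            return 1;   /* differs from /dev/zero, like cmp's exit status */
        }
    }
    fclose(f);
    return 0;           /* first 4 MiB read back as zeroes */
}
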
00:19:36.163 [2024-10-25 15:24:18.868306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75902 ] 00:19:36.422 [2024-10-25 15:24:19.050328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.682 [2024-10-25 15:24:19.158121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.941 [2024-10-25 15:24:19.479732] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.941 [2024-10-25 15:24:19.479805] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.941 [2024-10-25 15:24:19.640918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.941 [2024-10-25 15:24:19.641153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.941 [2024-10-25 15:24:19.641194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.941 [2024-10-25 15:24:19.641206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.941 [2024-10-25 15:24:19.644408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.941 [2024-10-25 15:24:19.644452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.941 [2024-10-25 15:24:19.644466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.176 ms 00:19:36.941 [2024-10-25 15:24:19.644476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.942 [2024-10-25 15:24:19.644587] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.942 [2024-10-25 15:24:19.645629] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.942 [2024-10-25 15:24:19.645665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.942 [2024-10-25 15:24:19.645676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.942 [2024-10-25 15:24:19.645699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:19:36.942 [2024-10-25 15:24:19.645709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.942 [2024-10-25 15:24:19.647223] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:36.942 [2024-10-25 15:24:19.667138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.942 [2024-10-25 15:24:19.667202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:36.942 [2024-10-25 15:24:19.667223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.949 ms 00:19:36.942 [2024-10-25 15:24:19.667234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.942 [2024-10-25 15:24:19.667338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.942 [2024-10-25 15:24:19.667353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:36.942 [2024-10-25 15:24:19.667366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:36.942 [2024-10-25 15:24:19.667376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.674063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:37.203 [2024-10-25 15:24:19.674096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.203 [2024-10-25 15:24:19.674107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:19:37.203 [2024-10-25 15:24:19.674117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.674243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.674258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.203 [2024-10-25 15:24:19.674269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:37.203 [2024-10-25 15:24:19.674280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.674310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.674336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.203 [2024-10-25 15:24:19.674350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:37.203 [2024-10-25 15:24:19.674361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.674386] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:37.203 [2024-10-25 15:24:19.678926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.678974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.203 [2024-10-25 15:24:19.678986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.554 ms 00:19:37.203 [2024-10-25 15:24:19.678996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.679066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.679079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.203 [2024-10-25 15:24:19.679090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.203 [2024-10-25 15:24:19.679100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.679123] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.203 [2024-10-25 15:24:19.679145] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:37.203 [2024-10-25 15:24:19.679202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.203 [2024-10-25 15:24:19.679221] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:37.203 [2024-10-25 15:24:19.679310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.203 [2024-10-25 15:24:19.679323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.203 [2024-10-25 15:24:19.679338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:37.203 [2024-10-25 15:24:19.679351] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.203 [2024-10-25 15:24:19.679363] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.203 [2024-10-25 15:24:19.679378] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:37.203 [2024-10-25 15:24:19.679388] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.203 [2024-10-25 15:24:19.679398] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.203 [2024-10-25 15:24:19.679408] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:37.203 [2024-10-25 15:24:19.679419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.679429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.203 [2024-10-25 15:24:19.679440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:37.203 [2024-10-25 15:24:19.679450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.679525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.203 [2024-10-25 15:24:19.679536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.203 [2024-10-25 15:24:19.679547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:37.203 [2024-10-25 15:24:19.679560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.203 [2024-10-25 15:24:19.679648] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.203 [2024-10-25 15:24:19.679660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.203 [2024-10-25 15:24:19.679670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.203 [2024-10-25 15:24:19.679681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.203 [2024-10-25 15:24:19.679692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.203 [2024-10-25 15:24:19.679701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.203 [2024-10-25 15:24:19.679711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:37.203 [2024-10-25 15:24:19.679720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.203 [2024-10-25 15:24:19.679731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:37.203 [2024-10-25 15:24:19.679741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.203 [2024-10-25 15:24:19.679750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.203 [2024-10-25 15:24:19.679760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:37.203 [2024-10-25 15:24:19.679769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.203 [2024-10-25 15:24:19.679790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.203 [2024-10-25 15:24:19.679799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:37.203 [2024-10-25 15:24:19.679809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.204 [2024-10-25 15:24:19.679828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:37.204 [2024-10-25 15:24:19.679837] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.204 [2024-10-25 15:24:19.679855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.204 [2024-10-25 15:24:19.679874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.204 [2024-10-25 15:24:19.679883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.204 [2024-10-25 15:24:19.679901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.204 [2024-10-25 15:24:19.679911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.204 [2024-10-25 15:24:19.679929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.204 [2024-10-25 15:24:19.679938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.204 [2024-10-25 15:24:19.679956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.204 [2024-10-25 15:24:19.679965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:37.204 [2024-10-25 15:24:19.679974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.204 [2024-10-25 15:24:19.679983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.204 [2024-10-25 15:24:19.679992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:37.204 [2024-10-25 15:24:19.680001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.204 [2024-10-25 15:24:19.680010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.204 [2024-10-25 15:24:19.680020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:37.204 [2024-10-25 15:24:19.680029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.204 [2024-10-25 15:24:19.680039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.204 [2024-10-25 15:24:19.680049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:37.204 [2024-10-25 15:24:19.680058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.204 [2024-10-25 15:24:19.680067] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.204 [2024-10-25 15:24:19.680076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.204 [2024-10-25 15:24:19.680086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.204 [2024-10-25 15:24:19.680096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.204 [2024-10-25 15:24:19.680110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.204 [2024-10-25 15:24:19.680120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.204 [2024-10-25 15:24:19.680130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.204 
[2024-10-25 15:24:19.680139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.204 [2024-10-25 15:24:19.680148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.204 [2024-10-25 15:24:19.680157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.204 [2024-10-25 15:24:19.680168] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.204 [2024-10-25 15:24:19.680192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:37.204 [2024-10-25 15:24:19.680214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:37.204 [2024-10-25 15:24:19.680224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:37.204 [2024-10-25 15:24:19.680234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:37.204 [2024-10-25 15:24:19.680244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:37.204 [2024-10-25 15:24:19.680254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:37.204 [2024-10-25 15:24:19.680265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:37.204 [2024-10-25 15:24:19.680275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:37.204 [2024-10-25 15:24:19.680285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:37.204 [2024-10-25 15:24:19.680295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:37.204 [2024-10-25 15:24:19.680345] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.204 [2024-10-25 15:24:19.680356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680367] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.204 [2024-10-25 15:24:19.680378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.204 [2024-10-25 15:24:19.680390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.204 [2024-10-25 15:24:19.680400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.204 [2024-10-25 15:24:19.680410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.680421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.204 [2024-10-25 15:24:19.680431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:19:37.204 [2024-10-25 15:24:19.680450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.720300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.720483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.204 [2024-10-25 15:24:19.720508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.858 ms 00:19:37.204 [2024-10-25 15:24:19.720519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.720654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.720666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.204 [2024-10-25 15:24:19.720684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:37.204 [2024-10-25 15:24:19.720694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.782633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.782670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.204 [2024-10-25 15:24:19.782685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.014 ms 00:19:37.204 [2024-10-25 15:24:19.782696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.782818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.782831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.204 [2024-10-25 15:24:19.782843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.204 [2024-10-25 15:24:19.782853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.783324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.783339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.204 [2024-10-25 15:24:19.783350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:19:37.204 [2024-10-25 15:24:19.783360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.783485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.783499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.204 [2024-10-25 15:24:19.783509] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:37.204 [2024-10-25 15:24:19.783519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.804496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.804530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.204 [2024-10-25 15:24:19.804544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.987 ms 00:19:37.204 [2024-10-25 15:24:19.804555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.824683] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:37.204 [2024-10-25 15:24:19.824719] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:37.204 [2024-10-25 15:24:19.824735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.824746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:37.204 [2024-10-25 15:24:19.824758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.077 ms 00:19:37.204 [2024-10-25 15:24:19.824768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.854884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.855047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:37.204 [2024-10-25 15:24:19.855069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.076 ms 00:19:37.204 [2024-10-25 15:24:19.855080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.204 [2024-10-25 15:24:19.873893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.204 [2024-10-25 15:24:19.874026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:37.205 [2024-10-25 15:24:19.874047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.691 ms 00:19:37.205 [2024-10-25 15:24:19.874058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.205 [2024-10-25 15:24:19.892650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.205 [2024-10-25 15:24:19.892787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:37.205 [2024-10-25 15:24:19.892808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.542 ms 00:19:37.205 [2024-10-25 15:24:19.892819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.205 [2024-10-25 15:24:19.893609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.205 [2024-10-25 15:24:19.893634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.205 [2024-10-25 15:24:19.893647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:19:37.205 [2024-10-25 15:24:19.893658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.464 [2024-10-25 15:24:19.980179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.464 [2024-10-25 15:24:19.980253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:37.464 [2024-10-25 15:24:19.980270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.632 ms 00:19:37.464 [2024-10-25 15:24:19.980281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.464 [2024-10-25 15:24:19.991599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:37.464 [2024-10-25 15:24:20.008654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.464 [2024-10-25 15:24:20.008729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.464 [2024-10-25 15:24:20.008747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.298 ms 00:19:37.464 [2024-10-25 15:24:20.008759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.464 [2024-10-25 15:24:20.008922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.464 [2024-10-25 15:24:20.008942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:37.464 [2024-10-25 15:24:20.008954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:37.464 [2024-10-25 15:24:20.008966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.464 [2024-10-25 15:24:20.009023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.464 [2024-10-25 15:24:20.009036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.464 [2024-10-25 15:24:20.009047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:37.464 [2024-10-25 15:24:20.009058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.464 [2024-10-25 15:24:20.009086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.464 [2024-10-25 15:24:20.009098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.464 [2024-10-25 15:24:20.009113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:37.464 [2024-10-25 15:24:20.009123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.465 [2024-10-25 15:24:20.009161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:37.465 [2024-10-25 15:24:20.009173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.465 [2024-10-25 15:24:20.009185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:37.465 [2024-10-25 15:24:20.009215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:37.465 [2024-10-25 15:24:20.009226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.465 [2024-10-25 15:24:20.047751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.465 [2024-10-25 15:24:20.047946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.465 [2024-10-25 15:24:20.047971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.563 ms 00:19:37.465 [2024-10-25 15:24:20.047983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.465 [2024-10-25 15:24:20.048109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.465 [2024-10-25 15:24:20.048124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.465 [2024-10-25 15:24:20.048136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:37.465 [2024-10-25 15:24:20.048146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:37.465 [2024-10-25 15:24:20.049067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.465 [2024-10-25 15:24:20.053709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.503 ms, result 0 00:19:37.465 [2024-10-25 15:24:20.054666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.465 [2024-10-25 15:24:20.073591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.725  [2024-10-25T15:24:20.453Z] Copying: 4096/4096 [kB] (average 28 MBps)[2024-10-25 15:24:20.220276] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.725 [2024-10-25 15:24:20.235369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.235417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.725 [2024-10-25 15:24:20.235432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.725 [2024-10-25 15:24:20.235443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.235466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.725 [2024-10-25 15:24:20.239800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.239964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.725 [2024-10-25 15:24:20.239985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:19:37.725 [2024-10-25 15:24:20.239996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.241806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.241844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.725 [2024-10-25 15:24:20.241857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.781 ms 00:19:37.725 [2024-10-25 15:24:20.241867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.245053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.245090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.725 [2024-10-25 15:24:20.245108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.173 ms 00:19:37.725 [2024-10-25 15:24:20.245119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.250757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.250893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.725 [2024-10-25 15:24:20.250920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms 00:19:37.725 [2024-10-25 15:24:20.250930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.287531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.287574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.725 [2024-10-25 15:24:20.287589] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.590 ms 00:19:37.725 [2024-10-25 15:24:20.287599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.309346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.309500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.725 [2024-10-25 15:24:20.309530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.727 ms 00:19:37.725 [2024-10-25 15:24:20.309544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.309735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.309749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.725 [2024-10-25 15:24:20.309761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:37.725 [2024-10-25 15:24:20.309771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.347469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.347509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.725 [2024-10-25 15:24:20.347523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.728 ms 00:19:37.725 [2024-10-25 15:24:20.347533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.384772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.384813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.725 [2024-10-25 15:24:20.384827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.242 ms 00:19:37.725 [2024-10-25 15:24:20.384838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.725 [2024-10-25 15:24:20.420153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.725 [2024-10-25 15:24:20.420322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.725 [2024-10-25 15:24:20.420345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.318 ms 00:19:37.725 [2024-10-25 15:24:20.420355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.986 [2024-10-25 15:24:20.455872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.986 [2024-10-25 15:24:20.456023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.986 [2024-10-25 15:24:20.456043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.445 ms 00:19:37.986 [2024-10-25 15:24:20.456055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.986 [2024-10-25 15:24:20.456109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.986 [2024-10-25 15:24:20.456132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:37.987 [2024-10-25 15:24:20.456195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.987 [2024-10-25 15:24:20.456961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.456971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.456981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.456992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457002] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.988 [2024-10-25 15:24:20.457258] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.988 [2024-10-25 15:24:20.457267] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc 00:19:37.988 [2024-10-25 15:24:20.457278] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.988 [2024-10-25 15:24:20.457288] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:37.988 [2024-10-25 15:24:20.457298] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.988 [2024-10-25 15:24:20.457308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.988 [2024-10-25 15:24:20.457318] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.988 [2024-10-25 15:24:20.457328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.988 [2024-10-25 15:24:20.457339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.988 [2024-10-25 15:24:20.457347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.988 [2024-10-25 15:24:20.457356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.988 [2024-10-25 15:24:20.457365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.988 [2024-10-25 15:24:20.457375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.988 [2024-10-25 15:24:20.457390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.259 ms 00:19:37.988 [2024-10-25 15:24:20.457400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.477137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.988 [2024-10-25 15:24:20.477196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.988 [2024-10-25 15:24:20.477212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.747 ms 00:19:37.988 [2024-10-25 15:24:20.477222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.477780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.988 [2024-10-25 15:24:20.477804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.988 [2024-10-25 15:24:20.477815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:19:37.988 [2024-10-25 15:24:20.477825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.532143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.988 [2024-10-25 15:24:20.532221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.988 [2024-10-25 15:24:20.532237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.988 [2024-10-25 15:24:20.532248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.532378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.988 [2024-10-25 15:24:20.532396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.988 [2024-10-25 15:24:20.532407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.988 [2024-10-25 15:24:20.532417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.532478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.988 [2024-10-25 15:24:20.532491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.988 [2024-10-25 15:24:20.532502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.988 [2024-10-25 15:24:20.532512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.532531] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.988 [2024-10-25 15:24:20.532541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.988 [2024-10-25 15:24:20.532556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.988 [2024-10-25 15:24:20.532566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.988 [2024-10-25 15:24:20.658641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.988 [2024-10-25 15:24:20.658892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.988 [2024-10-25 15:24:20.658926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.988 [2024-10-25 15:24:20.658937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.246 [2024-10-25 15:24:20.763352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.246 [2024-10-25 15:24:20.763478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.246 [2024-10-25 15:24:20.763540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.246 [2024-10-25 15:24:20.763694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.246 [2024-10-25 15:24:20.763762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.246 [2024-10-25 15:24:20.763835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763845] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.763886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.246 [2024-10-25 15:24:20.763898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.246 [2024-10-25 15:24:20.763909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.246 [2024-10-25 15:24:20.763922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.246 [2024-10-25 15:24:20.764053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.535 ms, result 0 00:19:39.182 00:19:39.182 00:19:39.182 15:24:21 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:39.182 15:24:21 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75933 00:19:39.182 15:24:21 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75933 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75933 ']' 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:39.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:39.182 15:24:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:39.182 [2024-10-25 15:24:21.908229] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:19:39.182 [2024-10-25 15:24:21.908358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75933 ] 00:19:39.441 [2024-10-25 15:24:22.088624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.700 [2024-10-25 15:24:22.203588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.637 15:24:23 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:40.637 15:24:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:40.637 15:24:23 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:40.637 [2024-10-25 15:24:23.334164] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.637 [2024-10-25 15:24:23.334238] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.897 [2024-10-25 15:24:23.521802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.522022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.897 [2024-10-25 15:24:23.522056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:40.897 [2024-10-25 15:24:23.522067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.525889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.525932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.897 [2024-10-25 15:24:23.525948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.796 ms 00:19:40.897 [2024-10-25 15:24:23.525967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.526155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.897 [2024-10-25 15:24:23.527125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.897 [2024-10-25 15:24:23.527163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.527189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.897 [2024-10-25 15:24:23.527203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:19:40.897 [2024-10-25 15:24:23.527214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.528749] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.897 [2024-10-25 15:24:23.548278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.548336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.897 [2024-10-25 15:24:23.548352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.568 ms 00:19:40.897 [2024-10-25 15:24:23.548383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.548497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.548516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.897 [2024-10-25 15:24:23.548528] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:40.897 [2024-10-25 15:24:23.548544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.556008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.556249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.897 [2024-10-25 15:24:23.556273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.421 ms 00:19:40.897 [2024-10-25 15:24:23.556290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.556450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.556470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.897 [2024-10-25 15:24:23.556481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:40.897 [2024-10-25 15:24:23.556497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.556534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.556550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.897 [2024-10-25 15:24:23.556561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.897 [2024-10-25 15:24:23.556575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.556604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:40.897 [2024-10-25 15:24:23.561386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.561418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.897 [2024-10-25 15:24:23.561436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.786 ms 00:19:40.897 [2024-10-25 15:24:23.561446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.561527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.561540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.897 [2024-10-25 15:24:23.561556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.897 [2024-10-25 15:24:23.561566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.561601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.897 [2024-10-25 15:24:23.561624] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:40.897 [2024-10-25 15:24:23.561676] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.897 [2024-10-25 15:24:23.561697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:40.897 [2024-10-25 15:24:23.561793] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:40.897 [2024-10-25 15:24:23.561807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.897 [2024-10-25 15:24:23.561827] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:40.897 [2024-10-25 15:24:23.561845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.897 [2024-10-25 15:24:23.561862] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.897 [2024-10-25 15:24:23.561873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:40.897 [2024-10-25 15:24:23.561889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.897 [2024-10-25 15:24:23.561899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:40.897 [2024-10-25 15:24:23.561918] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:40.897 [2024-10-25 15:24:23.561929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.561944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.897 [2024-10-25 15:24:23.561955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:19:40.897 [2024-10-25 15:24:23.561969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.897 [2024-10-25 15:24:23.562045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.897 [2024-10-25 15:24:23.562067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.897 [2024-10-25 15:24:23.562077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:40.898 [2024-10-25 15:24:23.562092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.898 [2024-10-25 15:24:23.562206] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.898 [2024-10-25 15:24:23.562225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.898 [2024-10-25 15:24:23.562237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.898 [2024-10-25 15:24:23.562277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.898 [2024-10-25 15:24:23.562320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.898 [2024-10-25 15:24:23.562353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.898 [2024-10-25 15:24:23.562367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:40.898 [2024-10-25 15:24:23.562378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.898 [2024-10-25 15:24:23.562393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.898 [2024-10-25 15:24:23.562403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:40.898 [2024-10-25 15:24:23.562418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 
[2024-10-25 15:24:23.562427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.898 [2024-10-25 15:24:23.562441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.898 [2024-10-25 15:24:23.562486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.898 [2024-10-25 15:24:23.562529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.898 [2024-10-25 15:24:23.562562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.898 [2024-10-25 15:24:23.562600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.898 [2024-10-25 15:24:23.562632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.898 [2024-10-25 15:24:23.562657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.898 [2024-10-25 15:24:23.562671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:40.898 [2024-10-25 15:24:23.562680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.898 [2024-10-25 15:24:23.562694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:40.898 [2024-10-25 15:24:23.562703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:40.898 [2024-10-25 15:24:23.562722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:40.898 [2024-10-25 15:24:23.562746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:40.898 [2024-10-25 15:24:23.562755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562768] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.898 [2024-10-25 15:24:23.562780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.898 [2024-10-25 15:24:23.562799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.898 [2024-10-25 15:24:23.562824] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:40.898 [2024-10-25 15:24:23.562834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.898 [2024-10-25 15:24:23.562848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.898 [2024-10-25 15:24:23.562858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.898 [2024-10-25 15:24:23.562872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.898 [2024-10-25 15:24:23.562882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.898 [2024-10-25 15:24:23.562898] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.898 [2024-10-25 15:24:23.562920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.562941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:40.898 [2024-10-25 15:24:23.562952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:40.898 [2024-10-25 15:24:23.562969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:40.898 [2024-10-25 15:24:23.562980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:40.898 [2024-10-25 15:24:23.562995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:40.898 [2024-10-25 15:24:23.563005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:40.898 [2024-10-25 15:24:23.563021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:40.898 [2024-10-25 15:24:23.563031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:40.898 [2024-10-25 15:24:23.563046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:40.898 [2024-10-25 15:24:23.563056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:40.898 [2024-10-25 15:24:23.563123] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.898 [2024-10-25 
15:24:23.563134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.898 [2024-10-25 15:24:23.563165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.898 [2024-10-25 15:24:23.563190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.898 [2024-10-25 15:24:23.563202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.898 [2024-10-25 15:24:23.563218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.898 [2024-10-25 15:24:23.563230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.898 [2024-10-25 15:24:23.563245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:19:40.898 [2024-10-25 15:24:23.563255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.898 [2024-10-25 15:24:23.605241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.898 [2024-10-25 15:24:23.605300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.898 [2024-10-25 15:24:23.605323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.974 ms 00:19:40.898 [2024-10-25 15:24:23.605334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.898 [2024-10-25 15:24:23.605508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.898 [2024-10-25 15:24:23.605522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.898 [2024-10-25 15:24:23.605538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:40.898 [2024-10-25 15:24:23.605549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.156 [2024-10-25 15:24:23.654361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.156 [2024-10-25 15:24:23.654640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.156 [2024-10-25 15:24:23.654684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.856 ms 00:19:41.156 [2024-10-25 15:24:23.654696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.654832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.654845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.157 [2024-10-25 15:24:23.654861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:41.157 [2024-10-25 15:24:23.654872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.655361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.655377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.157 [2024-10-25 15:24:23.655398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:19:41.157 [2024-10-25 15:24:23.655408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.655536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.655550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.157 [2024-10-25 15:24:23.655565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:41.157 [2024-10-25 15:24:23.655576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.678348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.678404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.157 [2024-10-25 15:24:23.678426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.776 ms 00:19:41.157 [2024-10-25 15:24:23.678438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.698160] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:41.157 [2024-10-25 15:24:23.698223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:41.157 [2024-10-25 15:24:23.698246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.698258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:41.157 [2024-10-25 15:24:23.698276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.685 ms 00:19:41.157 [2024-10-25 15:24:23.698286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.728663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.728714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:41.157 [2024-10-25 15:24:23.728734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.308 ms 00:19:41.157 [2024-10-25 15:24:23.728745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.746725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.746776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:41.157 [2024-10-25 15:24:23.746802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.909 ms 00:19:41.157 [2024-10-25 15:24:23.746812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.764907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.764955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:41.157 [2024-10-25 15:24:23.764975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.026 ms 00:19:41.157 [2024-10-25 15:24:23.765001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.765857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.765891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.157 [2024-10-25 15:24:23.765907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:19:41.157 [2024-10-25 15:24:23.765917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 
15:24:23.866142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.157 [2024-10-25 15:24:23.866366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:41.157 [2024-10-25 15:24:23.866402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.347 ms 00:19:41.157 [2024-10-25 15:24:23.866413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.157 [2024-10-25 15:24:23.877370] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:41.415 [2024-10-25 15:24:23.893420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.893489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.415 [2024-10-25 15:24:23.893507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.925 ms 00:19:41.415 [2024-10-25 15:24:23.893529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.893639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.893658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:41.415 [2024-10-25 15:24:23.893670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:41.415 [2024-10-25 15:24:23.893685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.893738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.893754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.415 [2024-10-25 15:24:23.893765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:41.415 [2024-10-25 15:24:23.893781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.893812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.893828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.415 [2024-10-25 15:24:23.893839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.415 [2024-10-25 15:24:23.893854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.893895] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:41.415 [2024-10-25 15:24:23.893917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.893928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:41.415 [2024-10-25 15:24:23.893949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:41.415 [2024-10-25 15:24:23.893960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.930022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.930065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.415 [2024-10-25 15:24:23.930086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.086 ms 00:19:41.415 [2024-10-25 15:24:23.930096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.415 [2024-10-25 15:24:23.930247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.415 [2024-10-25 15:24:23.930262] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:41.415 [2024-10-25 15:24:23.930278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms
00:19:41.415 [2024-10-25 15:24:23.930289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.415 [2024-10-25 15:24:23.931379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:41.415 [2024-10-25 15:24:23.935937] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.870 ms, result 0
00:19:41.415 [2024-10-25 15:24:23.937232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:41.415 Some configs were skipped because the RPC state that can call them passed over.
00:19:41.415 15:24:23 ftl.ftl_trim -- ftl/trim.sh@99 -- /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:19:41.674 [2024-10-25 15:24:24.180814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.674 [2024-10-25 15:24:24.181074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:41.674 [2024-10-25 15:24:24.181215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms
00:19:41.674 [2024-10-25 15:24:24.181269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.674 [2024-10-25 15:24:24.181372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.071 ms, result 0
00:19:41.674 true
00:19:41.674 15:24:24 ftl.ftl_trim -- ftl/trim.sh@100 -- /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:19:41.674 [2024-10-25 15:24:24.392413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.674 [2024-10-25 15:24:24.392601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:41.674 [2024-10-25 15:24:24.392713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms
00:19:41.674 [2024-10-25 15:24:24.392821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.674 [2024-10-25 15:24:24.392898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.773 ms, result 0
00:19:41.674 true
00:19:41.958 15:24:24 ftl.ftl_trim -- ftl/trim.sh@102 -- killprocess 75933
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@950 -- '[' -z 75933 ']'
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- kill -0 75933
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- uname
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@955 -- '[' Linux = Linux ']'
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- ps --no-headers -o comm= 75933
00:19:41.958 killing process with pid 75933
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@956 -- process_name=reactor_0
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- '[' reactor_0 = sudo ']'
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@968 -- echo 'killing process with pid 75933'
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@969 -- kill 75933
00:19:41.958 15:24:24 ftl.ftl_trim -- common/autotest_common.sh@974 -- wait 75933
00:19:42.895 [2024-10-25 15:24:25.553692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.553740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:42.895 [2024-10-25 15:24:25.553756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:42.895 [2024-10-25 15:24:25.553769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.553793] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:42.895 [2024-10-25 15:24:25.557971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.558003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:42.895 [2024-10-25 15:24:25.558021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.163 ms
00:19:42.895 [2024-10-25 15:24:25.558031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.558294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.558308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:42.895 [2024-10-25 15:24:25.558320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms
00:19:42.895 [2024-10-25 15:24:25.558331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.561497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.561525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:42.895 [2024-10-25 15:24:25.561539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms
00:19:42.895 [2024-10-25 15:24:25.561551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.567225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.567255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:42.895 [2024-10-25 15:24:25.567273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.645 ms
00:19:42.895 [2024-10-25 15:24:25.567283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.582726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.582757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:42.895 [2024-10-25 15:24:25.582777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.410 ms
00:19:42.895 [2024-10-25 15:24:25.582798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.593702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.593735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:42.895 [2024-10-25 15:24:25.593754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.864 ms
00:19:42.895 [2024-10-25 15:24:25.593765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.593894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.593910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:42.895 [2024-10-25 15:24:25.593924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms
00:19:42.895 [2024-10-25 15:24:25.593934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.895 [2024-10-25 15:24:25.609289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.895 [2024-10-25 15:24:25.609437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:42.895 [2024-10-25 15:24:25.609473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.355 ms
00:19:42.895 [2024-10-25 15:24:25.609484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.156 [2024-10-25 15:24:25.624569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.156 [2024-10-25 15:24:25.624710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:43.156 [2024-10-25 15:24:25.624745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.049 ms
00:19:43.157 [2024-10-25 15:24:25.624756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.157 [2024-10-25 15:24:25.639104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.157 [2024-10-25 15:24:25.639135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:43.157 [2024-10-25 15:24:25.639153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms
00:19:43.157 [2024-10-25 15:24:25.639163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.157 [2024-10-25 15:24:25.653852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.157 [2024-10-25 15:24:25.653885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:43.157 [2024-10-25 15:24:25.653903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.615 ms
00:19:43.157 [2024-10-25 15:24:25.653914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.157 [2024-10-25 15:24:25.653970] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:43.157 [2024-10-25 15:24:25.653989 .. 15:24:25.655401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (identical for all 100 bands)
00:19:43.158 [2024-10-25 15:24:25.655419] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:43.158 [2024-10-25 15:24:25.655439] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc
00:19:43.158 [2024-10-25 15:24:25.655469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:43.158 [2024-10-25 15:24:25.655490] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:43.158 [2024-10-25 15:24:25.655500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:43.158 [2024-10-25 15:24:25.655515] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:43.158 [2024-10-25 15:24:25.655525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:43.158 [2024-10-25 15:24:25.655541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:43.158 [2024-10-25 15:24:25.655551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:43.158 [2024-10-25 15:24:25.655565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:43.158 [2024-10-25 15:24:25.655575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:43.158 [2024-10-25 15:24:25.655590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
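The trace_step records above pair each management step ("name:") with the "duration:" record that follows it. A minimal offline helper for turning a log like this one into a per-step timing table, assuming GNU awk and exactly the *NOTICE* record format shown here (the script name and layout are illustrative, not part of the SPDK test suite):

#!/usr/bin/env bash
# ftl_step_times.sh (hypothetical helper): summarize FTL trace_step timings
# from an autotest console log such as this one.
# Usage: ./ftl_step_times.sh autotest.log
log="${1:?usage: $0 <log-file>}"
gawk -v RS='\\*NOTICE\\*: ' '
  # Each record is the text between *NOTICE* markers, e.g.
  # "[FTL][ftl0] name: Persist superblock 00:19:43.157 [...] mngt/ftl_mngt.c: 430:trace_step: "
  $2 == "name:" {
    step = ""
    for (i = 3; i <= NF && $i !~ /^00:/; i++)  # stop at the elapsed-time prefix of the next record
      step = step (step ? " " : "") $i
  }
  $2 == "duration:" && step != "" {
    printf "%-35s %10.3f ms\n", step, $3
    step = ""
  }
' "$log"

Run against this log it would report, for example, "Persist superblock 14.309 ms" and "Set FTL clean state 14.615 ms", matching the records above.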
00:19:43.158 [2024-10-25 15:24:25.655601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:43.158 [2024-10-25 15:24:25.655617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms 00:19:43.158 [2024-10-25 15:24:25.655628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.675901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.158 [2024-10-25 15:24:25.675937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.158 [2024-10-25 15:24:25.675961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.274 ms 00:19:43.158 [2024-10-25 15:24:25.675971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.676571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.158 [2024-10-25 15:24:25.676593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.158 [2024-10-25 15:24:25.676609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:19:43.158 [2024-10-25 15:24:25.676626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.746795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.158 [2024-10-25 15:24:25.746837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.158 [2024-10-25 15:24:25.746856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.158 [2024-10-25 15:24:25.746867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.746974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.158 [2024-10-25 15:24:25.746987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.158 [2024-10-25 15:24:25.747003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.158 [2024-10-25 15:24:25.747019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.747075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.158 [2024-10-25 15:24:25.747088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.158 [2024-10-25 15:24:25.747108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.158 [2024-10-25 15:24:25.747119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.747143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.158 [2024-10-25 15:24:25.747154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.158 [2024-10-25 15:24:25.747169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.158 [2024-10-25 15:24:25.747201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.158 [2024-10-25 15:24:25.873691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.158 [2024-10-25 15:24:25.873767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.158 [2024-10-25 15:24:25.873789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.158 [2024-10-25 15:24:25.873800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 
15:24:25.974813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.418 [2024-10-25 15:24:25.975121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.418 [2024-10-25 15:24:25.975310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.418 [2024-10-25 15:24:25.975383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.418 [2024-10-25 15:24:25.975547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.418 [2024-10-25 15:24:25.975629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.418 [2024-10-25 15:24:25.975718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.418 [2024-10-25 15:24:25.975790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.418 [2024-10-25 15:24:25.975805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.418 [2024-10-25 15:24:25.975816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.418 [2024-10-25 15:24:25.975961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.926 ms, result 0 00:19:44.355 15:24:26 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.355 [2024-10-25 15:24:27.076230] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:19:44.355 [2024-10-25 15:24:27.076538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76002 ] 00:19:44.613 [2024-10-25 15:24:27.259835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.871 [2024-10-25 15:24:27.375830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.130 [2024-10-25 15:24:27.708565] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.130 [2024-10-25 15:24:27.708637] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.390 [2024-10-25 15:24:27.870228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.390 [2024-10-25 15:24:27.870422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:45.390 [2024-10-25 15:24:27.870447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:45.390 [2024-10-25 15:24:27.870458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.390 [2024-10-25 15:24:27.873594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.873733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.391 [2024-10-25 15:24:27.873754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:19:45.391 [2024-10-25 15:24:27.873765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.873869] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:45.391 [2024-10-25 15:24:27.874822] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:45.391 [2024-10-25 15:24:27.874854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.874866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.391 [2024-10-25 15:24:27.874877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:19:45.391 [2024-10-25 15:24:27.874887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.876368] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:45.391 [2024-10-25 15:24:27.895637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.895676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:45.391 [2024-10-25 15:24:27.895695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.300 ms 00:19:45.391 [2024-10-25 15:24:27.895706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.895804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.895818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:45.391 [2024-10-25 15:24:27.895829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:45.391 [2024-10-25 
15:24:27.895840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.902505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.902538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.391 [2024-10-25 15:24:27.902550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.633 ms 00:19:45.391 [2024-10-25 15:24:27.902560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.902657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.902672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.391 [2024-10-25 15:24:27.902683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:45.391 [2024-10-25 15:24:27.902693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.902722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.902733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:45.391 [2024-10-25 15:24:27.902748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:45.391 [2024-10-25 15:24:27.902758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.902782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:45.391 [2024-10-25 15:24:27.907531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.907564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.391 [2024-10-25 15:24:27.907576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.763 ms 00:19:45.391 [2024-10-25 15:24:27.907587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.907653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.907665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:45.391 [2024-10-25 15:24:27.907676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:45.391 [2024-10-25 15:24:27.907686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.907705] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:45.391 [2024-10-25 15:24:27.907726] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:45.391 [2024-10-25 15:24:27.907766] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:45.391 [2024-10-25 15:24:27.907783] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:45.391 [2024-10-25 15:24:27.907871] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:45.391 [2024-10-25 15:24:27.907885] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:45.391 [2024-10-25 15:24:27.907898] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
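The superblock dumps in this log express regions in FTL blocks (blk_offs/blk_sz), while ftl_layout.c prints the same regions in MiB; the two agree if one assumes the 4 KiB FTL block size SPDK uses. A quick shell cross-check against the l2p entry (Region type:0x2 ... blk_sz:0x5a00) and the "Region l2p ... blocks: 90.00 MiB" line, both printed a few records below:

# Cross-check (bash arithmetic): 0x5a00 blocks of 4 KiB each should equal the
# 90.00 MiB that ftl_layout.c reports for the l2p region (4 KiB block size assumed).
blk_sz=0x5a00
echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # prints: 90 MiB
# Equivalently: 23592960 L2P entries * 4-byte addresses = 94371840 bytes = 90 MiB,
# matching the "L2P entries: 23592960" / "L2P address size: 4" records below.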
00:19:45.391 [2024-10-25 15:24:27.907911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:45.391 [2024-10-25 15:24:27.907923] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:45.391 [2024-10-25 15:24:27.907938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:45.391 [2024-10-25 15:24:27.907948] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:45.391 [2024-10-25 15:24:27.907959] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:45.391 [2024-10-25 15:24:27.907969] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:45.391 [2024-10-25 15:24:27.907979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.907989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:45.391 [2024-10-25 15:24:27.907999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:45.391 [2024-10-25 15:24:27.908009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.908085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.391 [2024-10-25 15:24:27.908096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:45.391 [2024-10-25 15:24:27.908107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:45.391 [2024-10-25 15:24:27.908120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.391 [2024-10-25 15:24:27.908227] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:45.391 [2024-10-25 15:24:27.908242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:45.391 [2024-10-25 15:24:27.908252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:45.391 [2024-10-25 15:24:27.908282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:45.391 [2024-10-25 15:24:27.908311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.391 [2024-10-25 15:24:27.908330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:45.391 [2024-10-25 15:24:27.908340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:45.391 [2024-10-25 15:24:27.908349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.391 [2024-10-25 15:24:27.908369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:45.391 [2024-10-25 15:24:27.908379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:45.391 [2024-10-25 15:24:27.908388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:45.391 [2024-10-25 15:24:27.908406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:45.391 [2024-10-25 15:24:27.908439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:45.391 [2024-10-25 15:24:27.908465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:45.391 [2024-10-25 15:24:27.908492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:45.391 [2024-10-25 15:24:27.908519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:45.391 [2024-10-25 15:24:27.908545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.391 [2024-10-25 15:24:27.908562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:45.391 [2024-10-25 15:24:27.908571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:45.391 [2024-10-25 15:24:27.908580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.391 [2024-10-25 15:24:27.908588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:45.391 [2024-10-25 15:24:27.908597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:45.391 [2024-10-25 15:24:27.908606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:45.391 [2024-10-25 15:24:27.908623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:45.391 [2024-10-25 15:24:27.908633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908642] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:45.391 [2024-10-25 15:24:27.908652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:45.391 [2024-10-25 15:24:27.908661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.391 [2024-10-25 15:24:27.908671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.391 [2024-10-25 15:24:27.908684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:45.391 [2024-10-25 15:24:27.908694] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:45.392 [2024-10-25 15:24:27.908702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:45.392 [2024-10-25 15:24:27.908711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:45.392 [2024-10-25 15:24:27.908720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:45.392 [2024-10-25 15:24:27.908729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:45.392 [2024-10-25 15:24:27.908740] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:45.392 [2024-10-25 15:24:27.908753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:45.392 [2024-10-25 15:24:27.908774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:45.392 [2024-10-25 15:24:27.908784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:45.392 [2024-10-25 15:24:27.908810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:45.392 [2024-10-25 15:24:27.908822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:45.392 [2024-10-25 15:24:27.908832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:45.392 [2024-10-25 15:24:27.908843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:45.392 [2024-10-25 15:24:27.908853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:45.392 [2024-10-25 15:24:27.908863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:45.392 [2024-10-25 15:24:27.908874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:45.392 [2024-10-25 15:24:27.908926] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:45.392 [2024-10-25 15:24:27.908938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:45.392 [2024-10-25 15:24:27.908959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:45.392 [2024-10-25 15:24:27.908981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:45.392 [2024-10-25 15:24:27.908992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:45.392 [2024-10-25 15:24:27.909003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:27.909013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:45.392 [2024-10-25 15:24:27.909023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:19:45.392 [2024-10-25 15:24:27.909036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:27.950289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:27.950467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.392 [2024-10-25 15:24:27.950491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.248 ms 00:19:45.392 [2024-10-25 15:24:27.950502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:27.950652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:27.950665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:45.392 [2024-10-25 15:24:27.950682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:45.392 [2024-10-25 15:24:27.950692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.001663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.001707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.392 [2024-10-25 15:24:28.001722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.030 ms 00:19:45.392 [2024-10-25 15:24:28.001733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.001862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.001875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.392 [2024-10-25 15:24:28.001886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:45.392 [2024-10-25 15:24:28.001896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.002358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.002372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.392 [2024-10-25 15:24:28.002383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:45.392 [2024-10-25 15:24:28.002393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.002518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.002543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.392 [2024-10-25 15:24:28.002553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:45.392 [2024-10-25 15:24:28.002563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.020626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.020667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.392 [2024-10-25 15:24:28.020682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.069 ms 00:19:45.392 [2024-10-25 15:24:28.020693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.039571] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:45.392 [2024-10-25 15:24:28.039613] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:45.392 [2024-10-25 15:24:28.039628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.039639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:45.392 [2024-10-25 15:24:28.039651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:19:45.392 [2024-10-25 15:24:28.039661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.069628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.069696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:45.392 [2024-10-25 15:24:28.069712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.931 ms 00:19:45.392 [2024-10-25 15:24:28.069723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.088715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.088756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:45.392 [2024-10-25 15:24:28.088769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.936 ms 00:19:45.392 [2024-10-25 15:24:28.088779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.107063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.107242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:45.392 [2024-10-25 15:24:28.107264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.234 ms 00:19:45.392 [2024-10-25 15:24:28.107274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.392 [2024-10-25 15:24:28.108093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.392 [2024-10-25 15:24:28.108125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:45.392 [2024-10-25 15:24:28.108138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:19:45.392 [2024-10-25 15:24:28.108148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.652 [2024-10-25 15:24:28.193686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.652 [2024-10-25 
15:24:28.193759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:45.652 [2024-10-25 15:24:28.193777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.648 ms 00:19:45.652 [2024-10-25 15:24:28.193787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.652 [2024-10-25 15:24:28.204940] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:45.652 [2024-10-25 15:24:28.221357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.652 [2024-10-25 15:24:28.221417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:45.652 [2024-10-25 15:24:28.221434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.497 ms 00:19:45.652 [2024-10-25 15:24:28.221445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.652 [2024-10-25 15:24:28.221578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.221596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:45.653 [2024-10-25 15:24:28.221608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:45.653 [2024-10-25 15:24:28.221618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.221674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.221685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:45.653 [2024-10-25 15:24:28.221696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:45.653 [2024-10-25 15:24:28.221706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.221733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.221744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:45.653 [2024-10-25 15:24:28.221757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:45.653 [2024-10-25 15:24:28.221767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.221803] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:45.653 [2024-10-25 15:24:28.221815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.221826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:45.653 [2024-10-25 15:24:28.221836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:45.653 [2024-10-25 15:24:28.221846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.258759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.258813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:45.653 [2024-10-25 15:24:28.258828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.950 ms 00:19:45.653 [2024-10-25 15:24:28.258840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.258963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.653 [2024-10-25 15:24:28.258977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:45.653 [2024-10-25 
15:24:28.258989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:45.653 [2024-10-25 15:24:28.258999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.653 [2024-10-25 15:24:28.259957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.653 [2024-10-25 15:24:28.264371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 390.046 ms, result 0 00:19:45.653 [2024-10-25 15:24:28.265271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:45.653 [2024-10-25 15:24:28.283597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:47.032  [2024-10-25T15:24:30.697Z] Copying: 33/256 [MB] (33 MBps) [2024-10-25T15:24:31.675Z] Copying: 62/256 [MB] (29 MBps) [2024-10-25T15:24:32.613Z] Copying: 89/256 [MB] (27 MBps) [2024-10-25T15:24:33.552Z] Copying: 116/256 [MB] (27 MBps) [2024-10-25T15:24:34.500Z] Copying: 144/256 [MB] (27 MBps) [2024-10-25T15:24:35.438Z] Copying: 171/256 [MB] (27 MBps) [2024-10-25T15:24:36.375Z] Copying: 200/256 [MB] (28 MBps) [2024-10-25T15:24:37.753Z] Copying: 228/256 [MB] (28 MBps) [2024-10-25T15:24:38.031Z] Copying: 256/256 [MB] (average 28 MBps)[2024-10-25 15:24:37.766370] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.303 [2024-10-25 15:24:37.789095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.789457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:55.303 [2024-10-25 15:24:37.789563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.303 [2024-10-25 15:24:37.789602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.789733] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:55.303 [2024-10-25 15:24:37.794053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.794586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:55.303 [2024-10-25 15:24:37.794690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:19:55.303 [2024-10-25 15:24:37.794729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.795009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.795050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:55.303 [2024-10-25 15:24:37.795082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:55.303 [2024-10-25 15:24:37.795168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.798068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.798174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:55.303 [2024-10-25 15:24:37.798309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.843 ms 00:19:55.303 [2024-10-25 15:24:37.798347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.804023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:55.303 [2024-10-25 15:24:37.804153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:55.303 [2024-10-25 15:24:37.804317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.638 ms 00:19:55.303 [2024-10-25 15:24:37.804357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.841798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.841943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:55.303 [2024-10-25 15:24:37.842058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.406 ms 00:19:55.303 [2024-10-25 15:24:37.842095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.863313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.863460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:55.303 [2024-10-25 15:24:37.863552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.184 ms 00:19:55.303 [2024-10-25 15:24:37.863589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.863749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.863862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:55.303 [2024-10-25 15:24:37.863958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:55.303 [2024-10-25 15:24:37.863989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.900248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.900393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:55.303 [2024-10-25 15:24:37.900468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.267 ms 00:19:55.303 [2024-10-25 15:24:37.900503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.935964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.936123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:55.303 [2024-10-25 15:24:37.936258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.450 ms 00:19:55.303 [2024-10-25 15:24:37.936297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:37.971367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:37.971507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:55.303 [2024-10-25 15:24:37.971579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.059 ms 00:19:55.303 [2024-10-25 15:24:37.971613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 15:24:38.007262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.303 [2024-10-25 15:24:38.007400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:55.303 [2024-10-25 15:24:38.007471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.598 ms 00:19:55.303 [2024-10-25 15:24:38.007506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.303 [2024-10-25 
15:24:38.007570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:55.303 [2024-10-25 15:24:38.007616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:55.303 [2024-10-25 15:24:38.007915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.007999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 
15:24:38.008040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:19:55.304 [2024-10-25 15:24:38.008316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:55.304 [2024-10-25 15:24:38.008864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:55.304 [2024-10-25 15:24:38.008874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 807ee30e-b752-4504-9751-1143cda47acc 00:19:55.304 [2024-10-25 15:24:38.008884] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:55.304 [2024-10-25 15:24:38.008894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:55.304 [2024-10-25 15:24:38.008904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:55.304 [2024-10-25 15:24:38.008916] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:55.304 [2024-10-25 15:24:38.008926] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:55.304 [2024-10-25 15:24:38.008936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:55.305 [2024-10-25 15:24:38.008947] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:55.305 [2024-10-25 15:24:38.008956] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:55.305 [2024-10-25 15:24:38.008965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:55.305 [2024-10-25 15:24:38.008975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.305 [2024-10-25 15:24:38.008986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:55.305 [2024-10-25 15:24:38.008997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:19:55.305 [2024-10-25 15:24:38.009011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.305 [2024-10-25 15:24:38.028899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.305 [2024-10-25 15:24:38.028936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:55.305 [2024-10-25 15:24:38.028949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.896 ms 00:19:55.305 [2024-10-25 15:24:38.028959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.305 [2024-10-25 15:24:38.029552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.305 [2024-10-25 15:24:38.029572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:55.305 [2024-10-25 15:24:38.029583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:55.305 [2024-10-25 15:24:38.029593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-10-25 15:24:38.085338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.564 [2024-10-25 15:24:38.085380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.564 [2024-10-25 15:24:38.085395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.564 [2024-10-25 15:24:38.085407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-10-25 15:24:38.085508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.564 [2024-10-25 15:24:38.085523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.564 [2024-10-25 15:24:38.085534] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.564 [2024-10-25 15:24:38.085543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-10-25 15:24:38.085591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.564 [2024-10-25 15:24:38.085604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.564 [2024-10-25 15:24:38.085614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.564 [2024-10-25 15:24:38.085624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-10-25 15:24:38.085642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.564 [2024-10-25 15:24:38.085652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.564 [2024-10-25 15:24:38.085667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.564 [2024-10-25 15:24:38.085677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.564 [2024-10-25 15:24:38.210793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.564 [2024-10-25 15:24:38.210856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.564 [2024-10-25 15:24:38.210873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.564 [2024-10-25 15:24:38.210884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.823 [2024-10-25 15:24:38.311509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.823 [2024-10-25 15:24:38.311628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.823 [2024-10-25 15:24:38.311689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.823 [2024-10-25 15:24:38.311824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:19:55.823 [2024-10-25 15:24:38.311893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.311946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.311957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.823 [2024-10-25 15:24:38.311967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.311977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.312020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.823 [2024-10-25 15:24:38.312031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.823 [2024-10-25 15:24:38.312042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.823 [2024-10-25 15:24:38.312052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.823 [2024-10-25 15:24:38.312210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.948 ms, result 0 00:19:56.760 00:19:56.760 00:19:56.760 15:24:39 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:57.329 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:57.329 15:24:39 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75933 00:19:57.329 15:24:39 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75933 ']' 00:19:57.329 15:24:39 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75933 00:19:57.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75933) - No such process 00:19:57.329 Process with pid 75933 is not found 00:19:57.329 15:24:39 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 75933 is not found' 00:19:57.329 ************************************ 00:19:57.329 END TEST ftl_trim 00:19:57.329 ************************************ 00:19:57.329 00:19:57.329 real 1m7.718s 00:19:57.329 user 1m34.346s 00:19:57.329 sys 0m6.614s 00:19:57.329 15:24:39 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:57.329 15:24:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:57.329 15:24:39 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:57.329 15:24:39 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:57.329 15:24:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:57.329 15:24:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:57.329 ************************************ 00:19:57.329 START TEST ftl_restore 00:19:57.329 
************************************ 00:19:57.329 15:24:39 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:57.589 * Looking for test storage... 00:19:57.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lcov --version 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.589 15:24:40 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:19:57.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.589 --rc genhtml_branch_coverage=1 00:19:57.589 --rc genhtml_function_coverage=1 00:19:57.589 --rc genhtml_legend=1 00:19:57.589 --rc geninfo_all_blocks=1 00:19:57.589 --rc geninfo_unexecuted_blocks=1 00:19:57.589 00:19:57.589 ' 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:19:57.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.589 --rc genhtml_branch_coverage=1 00:19:57.589 --rc genhtml_function_coverage=1 00:19:57.589 --rc genhtml_legend=1 00:19:57.589 --rc geninfo_all_blocks=1 00:19:57.589 --rc geninfo_unexecuted_blocks=1 00:19:57.589 00:19:57.589 ' 00:19:57.589 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:19:57.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.590 --rc genhtml_branch_coverage=1 00:19:57.590 --rc genhtml_function_coverage=1 00:19:57.590 --rc genhtml_legend=1 00:19:57.590 --rc geninfo_all_blocks=1 00:19:57.590 --rc geninfo_unexecuted_blocks=1 00:19:57.590 00:19:57.590 ' 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:19:57.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.590 --rc genhtml_branch_coverage=1 00:19:57.590 --rc genhtml_function_coverage=1 00:19:57.590 --rc genhtml_legend=1 00:19:57.590 --rc geninfo_all_blocks=1 00:19:57.590 --rc geninfo_unexecuted_blocks=1 00:19:57.590 00:19:57.590 ' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
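The cmp_versions/lt trace above is the stock dotted-version comparison from scripts/common.sh: split both version strings on the characters '.', '-' and ':', then compare field by field numerically, padding missing fields with 0. A minimal standalone sketch of that idiom, assuming only the behavior visible in the trace (this is an illustrative re-implementation, not the exact scripts/common.sh source):

    lt() {   # succeeds (returns 0) when dotted version $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    lt 1.15 2 && echo 'old lcov'   # 1 < 2 in the first field, so the old-lcov LCOV_OPTS branch runs, as in the trace above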
00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.TrqgE6bXWp 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:57.590 
15:24:40 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76200 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.590 15:24:40 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76200 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76200 ']' 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.590 15:24:40 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:57.850 [2024-10-25 15:24:40.351056] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:19:57.850 [2024-10-25 15:24:40.351199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76200 ] 00:19:57.850 [2024-10-25 15:24:40.533672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.108 [2024-10-25 15:24:40.640681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.044 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.044 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:59.044 15:24:41 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:59.303 15:24:41 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:59.303 15:24:41 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:59.303 15:24:41 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:59.303 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:59.303 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:59.303 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:59.303 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:59.303 15:24:41 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:59.563 { 00:19:59.563 "name": "nvme0n1", 00:19:59.563 "aliases": [ 00:19:59.563 "082b4cf3-7d0f-4e54-af93-e1578d0bfde6" 00:19:59.563 ], 00:19:59.563 "product_name": "NVMe disk", 00:19:59.563 "block_size": 4096, 00:19:59.563 "num_blocks": 1310720, 00:19:59.563 "uuid": 
"082b4cf3-7d0f-4e54-af93-e1578d0bfde6", 00:19:59.563 "numa_id": -1, 00:19:59.563 "assigned_rate_limits": { 00:19:59.563 "rw_ios_per_sec": 0, 00:19:59.563 "rw_mbytes_per_sec": 0, 00:19:59.563 "r_mbytes_per_sec": 0, 00:19:59.563 "w_mbytes_per_sec": 0 00:19:59.563 }, 00:19:59.563 "claimed": true, 00:19:59.563 "claim_type": "read_many_write_one", 00:19:59.563 "zoned": false, 00:19:59.563 "supported_io_types": { 00:19:59.563 "read": true, 00:19:59.563 "write": true, 00:19:59.563 "unmap": true, 00:19:59.563 "flush": true, 00:19:59.563 "reset": true, 00:19:59.563 "nvme_admin": true, 00:19:59.563 "nvme_io": true, 00:19:59.563 "nvme_io_md": false, 00:19:59.563 "write_zeroes": true, 00:19:59.563 "zcopy": false, 00:19:59.563 "get_zone_info": false, 00:19:59.563 "zone_management": false, 00:19:59.563 "zone_append": false, 00:19:59.563 "compare": true, 00:19:59.563 "compare_and_write": false, 00:19:59.563 "abort": true, 00:19:59.563 "seek_hole": false, 00:19:59.563 "seek_data": false, 00:19:59.563 "copy": true, 00:19:59.563 "nvme_iov_md": false 00:19:59.563 }, 00:19:59.563 "driver_specific": { 00:19:59.563 "nvme": [ 00:19:59.563 { 00:19:59.563 "pci_address": "0000:00:11.0", 00:19:59.563 "trid": { 00:19:59.563 "trtype": "PCIe", 00:19:59.563 "traddr": "0000:00:11.0" 00:19:59.563 }, 00:19:59.563 "ctrlr_data": { 00:19:59.563 "cntlid": 0, 00:19:59.563 "vendor_id": "0x1b36", 00:19:59.563 "model_number": "QEMU NVMe Ctrl", 00:19:59.563 "serial_number": "12341", 00:19:59.563 "firmware_revision": "8.0.0", 00:19:59.563 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:59.563 "oacs": { 00:19:59.563 "security": 0, 00:19:59.563 "format": 1, 00:19:59.563 "firmware": 0, 00:19:59.563 "ns_manage": 1 00:19:59.563 }, 00:19:59.563 "multi_ctrlr": false, 00:19:59.563 "ana_reporting": false 00:19:59.563 }, 00:19:59.563 "vs": { 00:19:59.563 "nvme_version": "1.4" 00:19:59.563 }, 00:19:59.563 "ns_data": { 00:19:59.563 "id": 1, 00:19:59.563 "can_share": false 00:19:59.563 } 00:19:59.563 } 00:19:59.563 ], 00:19:59.563 "mp_policy": "active_passive" 00:19:59.563 } 00:19:59.563 } 00:19:59.563 ]' 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:59.563 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:59.563 15:24:42 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:59.563 15:24:42 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:59.563 15:24:42 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:59.563 15:24:42 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:59.563 15:24:42 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:59.823 15:24:42 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=7af62b3b-b4df-4fa4-8a94-6082b2067501 00:19:59.823 15:24:42 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:59.823 15:24:42 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7af62b3b-b4df-4fa4-8a94-6082b2067501 00:20:00.082 15:24:42 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:00.082 15:24:42 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5adbfdb8-8d94-4eb8-9f2f-69782688ddb5 00:20:00.082 15:24:42 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5adbfdb8-8d94-4eb8-9f2f-69782688ddb5 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:00.341 15:24:42 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.341 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.341 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:00.341 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:00.341 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:00.341 15:24:42 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:00.601 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:00.601 { 00:20:00.601 "name": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:00.601 "aliases": [ 00:20:00.601 "lvs/nvme0n1p0" 00:20:00.601 ], 00:20:00.601 "product_name": "Logical Volume", 00:20:00.601 "block_size": 4096, 00:20:00.601 "num_blocks": 26476544, 00:20:00.601 "uuid": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:00.601 "assigned_rate_limits": { 00:20:00.601 "rw_ios_per_sec": 0, 00:20:00.601 "rw_mbytes_per_sec": 0, 00:20:00.601 "r_mbytes_per_sec": 0, 00:20:00.601 "w_mbytes_per_sec": 0 00:20:00.601 }, 00:20:00.601 "claimed": false, 00:20:00.601 "zoned": false, 00:20:00.601 "supported_io_types": { 00:20:00.601 "read": true, 00:20:00.601 "write": true, 00:20:00.601 "unmap": true, 00:20:00.601 "flush": false, 00:20:00.601 "reset": true, 00:20:00.601 "nvme_admin": false, 00:20:00.601 "nvme_io": false, 00:20:00.601 "nvme_io_md": false, 00:20:00.601 "write_zeroes": true, 00:20:00.601 "zcopy": false, 00:20:00.601 "get_zone_info": false, 00:20:00.601 "zone_management": false, 00:20:00.601 "zone_append": false, 00:20:00.601 "compare": false, 00:20:00.601 "compare_and_write": false, 00:20:00.601 "abort": false, 00:20:00.601 "seek_hole": true, 00:20:00.601 "seek_data": true, 00:20:00.601 "copy": false, 00:20:00.601 "nvme_iov_md": false 00:20:00.601 }, 00:20:00.601 "driver_specific": { 00:20:00.601 "lvol": { 00:20:00.601 "lvol_store_uuid": "5adbfdb8-8d94-4eb8-9f2f-69782688ddb5", 00:20:00.601 "base_bdev": "nvme0n1", 00:20:00.601 "thin_provision": true, 00:20:00.601 "num_allocated_clusters": 0, 00:20:00.601 "snapshot": false, 00:20:00.601 "clone": false, 00:20:00.601 "esnap_clone": false 00:20:00.601 } 00:20:00.601 } 00:20:00.601 } 00:20:00.601 ]' 00:20:00.601 15:24:43 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:00.601 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:00.601 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:00.860 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:00.860 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:00.860 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:00.860 15:24:43 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:00.860 15:24:43 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:00.860 15:24:43 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:01.119 15:24:43 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:01.119 15:24:43 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:01.119 15:24:43 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.119 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.119 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:01.119 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:01.119 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:01.119 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:01.378 { 00:20:01.378 "name": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:01.378 "aliases": [ 00:20:01.378 "lvs/nvme0n1p0" 00:20:01.378 ], 00:20:01.378 "product_name": "Logical Volume", 00:20:01.378 "block_size": 4096, 00:20:01.378 "num_blocks": 26476544, 00:20:01.378 "uuid": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:01.378 "assigned_rate_limits": { 00:20:01.378 "rw_ios_per_sec": 0, 00:20:01.378 "rw_mbytes_per_sec": 0, 00:20:01.378 "r_mbytes_per_sec": 0, 00:20:01.378 "w_mbytes_per_sec": 0 00:20:01.378 }, 00:20:01.378 "claimed": false, 00:20:01.378 "zoned": false, 00:20:01.378 "supported_io_types": { 00:20:01.378 "read": true, 00:20:01.378 "write": true, 00:20:01.378 "unmap": true, 00:20:01.378 "flush": false, 00:20:01.378 "reset": true, 00:20:01.378 "nvme_admin": false, 00:20:01.378 "nvme_io": false, 00:20:01.378 "nvme_io_md": false, 00:20:01.378 "write_zeroes": true, 00:20:01.378 "zcopy": false, 00:20:01.378 "get_zone_info": false, 00:20:01.378 "zone_management": false, 00:20:01.378 "zone_append": false, 00:20:01.378 "compare": false, 00:20:01.378 "compare_and_write": false, 00:20:01.378 "abort": false, 00:20:01.378 "seek_hole": true, 00:20:01.378 "seek_data": true, 00:20:01.378 "copy": false, 00:20:01.378 "nvme_iov_md": false 00:20:01.378 }, 00:20:01.378 "driver_specific": { 00:20:01.378 "lvol": { 00:20:01.378 "lvol_store_uuid": "5adbfdb8-8d94-4eb8-9f2f-69782688ddb5", 00:20:01.378 "base_bdev": "nvme0n1", 00:20:01.378 "thin_provision": true, 00:20:01.378 "num_allocated_clusters": 0, 00:20:01.378 "snapshot": false, 00:20:01.378 "clone": false, 00:20:01.378 "esnap_clone": false 00:20:01.378 } 00:20:01.378 } 00:20:01.378 } 00:20:01.378 ]' 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:01.378 15:24:43 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:01.378 15:24:43 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:01.378 15:24:43 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:01.637 15:24:44 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:01.637 15:24:44 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.637 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.637 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:01.637 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:20:01.637 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:20:01.637 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 59a8ffa4-e808-480f-b279-60c652af40d1 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:01.895 { 00:20:01.895 "name": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:01.895 "aliases": [ 00:20:01.895 "lvs/nvme0n1p0" 00:20:01.895 ], 00:20:01.895 "product_name": "Logical Volume", 00:20:01.895 "block_size": 4096, 00:20:01.895 "num_blocks": 26476544, 00:20:01.895 "uuid": "59a8ffa4-e808-480f-b279-60c652af40d1", 00:20:01.895 "assigned_rate_limits": { 00:20:01.895 "rw_ios_per_sec": 0, 00:20:01.895 "rw_mbytes_per_sec": 0, 00:20:01.895 "r_mbytes_per_sec": 0, 00:20:01.895 "w_mbytes_per_sec": 0 00:20:01.895 }, 00:20:01.895 "claimed": false, 00:20:01.895 "zoned": false, 00:20:01.895 "supported_io_types": { 00:20:01.895 "read": true, 00:20:01.895 "write": true, 00:20:01.895 "unmap": true, 00:20:01.895 "flush": false, 00:20:01.895 "reset": true, 00:20:01.895 "nvme_admin": false, 00:20:01.895 "nvme_io": false, 00:20:01.895 "nvme_io_md": false, 00:20:01.895 "write_zeroes": true, 00:20:01.895 "zcopy": false, 00:20:01.895 "get_zone_info": false, 00:20:01.895 "zone_management": false, 00:20:01.895 "zone_append": false, 00:20:01.895 "compare": false, 00:20:01.895 "compare_and_write": false, 00:20:01.895 "abort": false, 00:20:01.895 "seek_hole": true, 00:20:01.895 "seek_data": true, 00:20:01.895 "copy": false, 00:20:01.895 "nvme_iov_md": false 00:20:01.895 }, 00:20:01.895 "driver_specific": { 00:20:01.895 "lvol": { 00:20:01.895 "lvol_store_uuid": "5adbfdb8-8d94-4eb8-9f2f-69782688ddb5", 00:20:01.895 "base_bdev": "nvme0n1", 00:20:01.895 "thin_provision": true, 00:20:01.895 "num_allocated_clusters": 0, 00:20:01.895 "snapshot": false, 00:20:01.895 "clone": false, 00:20:01.895 "esnap_clone": false 00:20:01.895 } 00:20:01.895 } 00:20:01.895 } 00:20:01.895 ]' 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:01.895 15:24:44 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:01.895 15:24:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 59a8ffa4-e808-480f-b279-60c652af40d1 --l2p_dram_limit 10' 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:01.895 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:01.895 15:24:44 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 59a8ffa4-e808-480f-b279-60c652af40d1 --l2p_dram_limit 10 -c nvc0n1p0 00:20:02.156 [2024-10-25 15:24:44.647224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.647279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.156 [2024-10-25 15:24:44.647300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.156 [2024-10-25 15:24:44.647312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.647379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.647391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.156 [2024-10-25 15:24:44.647405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:02.156 [2024-10-25 15:24:44.647415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.647446] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.156 [2024-10-25 15:24:44.648548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.156 [2024-10-25 15:24:44.648576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.648587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.156 [2024-10-25 15:24:44.648601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.141 ms 00:20:02.156 [2024-10-25 15:24:44.648611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.648657] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 52199eca-5cab-4185-b25a-1e7503e93f9a 00:20:02.156 [2024-10-25 15:24:44.650050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.650211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:02.156 [2024-10-25 15:24:44.650231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:02.156 [2024-10-25 15:24:44.650247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.657642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 
15:24:44.657675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.156 [2024-10-25 15:24:44.657687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.316 ms 00:20:02.156 [2024-10-25 15:24:44.657703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.657800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.657818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.156 [2024-10-25 15:24:44.657829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:02.156 [2024-10-25 15:24:44.657846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.657908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.657923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.156 [2024-10-25 15:24:44.657934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.156 [2024-10-25 15:24:44.657946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.657976] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.156 [2024-10-25 15:24:44.662993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.663024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.156 [2024-10-25 15:24:44.663039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.034 ms 00:20:02.156 [2024-10-25 15:24:44.663054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.663089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.663100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.156 [2024-10-25 15:24:44.663114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:02.156 [2024-10-25 15:24:44.663124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.663160] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:02.156 [2024-10-25 15:24:44.663301] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.156 [2024-10-25 15:24:44.663323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.156 [2024-10-25 15:24:44.663337] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:02.156 [2024-10-25 15:24:44.663352] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.156 [2024-10-25 15:24:44.663366] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.156 [2024-10-25 15:24:44.663379] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:02.156 [2024-10-25 15:24:44.663390] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.156 [2024-10-25 15:24:44.663402] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.156 [2024-10-25 15:24:44.663412] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.156 [2024-10-25 15:24:44.663428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.156 [2024-10-25 15:24:44.663439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.156 [2024-10-25 15:24:44.663452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:02.156 [2024-10-25 15:24:44.663473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.156 [2024-10-25 15:24:44.663554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.157 [2024-10-25 15:24:44.663565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.157 [2024-10-25 15:24:44.663577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:02.157 [2024-10-25 15:24:44.663587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.157 [2024-10-25 15:24:44.663675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.157 [2024-10-25 15:24:44.663690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.157 [2024-10-25 15:24:44.663704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.157 [2024-10-25 15:24:44.663736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.157 [2024-10-25 15:24:44.663770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.157 [2024-10-25 15:24:44.663792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.157 [2024-10-25 15:24:44.663802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:02.157 [2024-10-25 15:24:44.663814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.157 [2024-10-25 15:24:44.663824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.157 [2024-10-25 15:24:44.663836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:02.157 [2024-10-25 15:24:44.663846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:02.157 [2024-10-25 15:24:44.663870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.157 [2024-10-25 15:24:44.663905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.157 
[2024-10-25 15:24:44.663936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.157 [2024-10-25 15:24:44.663968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:02.157 [2024-10-25 15:24:44.663978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.157 [2024-10-25 15:24:44.663990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.157 [2024-10-25 15:24:44.663999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:02.157 [2024-10-25 15:24:44.664010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.157 [2024-10-25 15:24:44.664020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.157 [2024-10-25 15:24:44.664034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:02.157 [2024-10-25 15:24:44.664043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.157 [2024-10-25 15:24:44.664055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.157 [2024-10-25 15:24:44.664064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:02.157 [2024-10-25 15:24:44.664076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.157 [2024-10-25 15:24:44.664085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.157 [2024-10-25 15:24:44.664097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:02.157 [2024-10-25 15:24:44.664106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.664117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:02.157 [2024-10-25 15:24:44.664127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:02.157 [2024-10-25 15:24:44.664138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.664147] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.157 [2024-10-25 15:24:44.664159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.157 [2024-10-25 15:24:44.664172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.157 [2024-10-25 15:24:44.664195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.157 [2024-10-25 15:24:44.664207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.157 [2024-10-25 15:24:44.664223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.157 [2024-10-25 15:24:44.664233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.157 [2024-10-25 15:24:44.664245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.157 [2024-10-25 15:24:44.664254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.157 [2024-10-25 15:24:44.664266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.157 [2024-10-25 15:24:44.664280] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.157 [2024-10-25 
15:24:44.664295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:02.157 [2024-10-25 15:24:44.664321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:02.157 [2024-10-25 15:24:44.664331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:02.157 [2024-10-25 15:24:44.664344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:02.157 [2024-10-25 15:24:44.664354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:02.157 [2024-10-25 15:24:44.664367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:02.157 [2024-10-25 15:24:44.664379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:02.157 [2024-10-25 15:24:44.664402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:02.157 [2024-10-25 15:24:44.664412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:02.157 [2024-10-25 15:24:44.664428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:02.157 [2024-10-25 15:24:44.664482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.157 [2024-10-25 15:24:44.664496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.157 [2024-10-25 15:24:44.664524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.157 [2024-10-25 15:24:44.664550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.157 [2024-10-25 15:24:44.664563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.157 [2024-10-25 15:24:44.664575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.157 [2024-10-25 15:24:44.664588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.157 [2024-10-25 15:24:44.664599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:20:02.157 [2024-10-25 15:24:44.664612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.157 [2024-10-25 15:24:44.664654] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:02.157 [2024-10-25 15:24:44.664672] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:06.354 [2024-10-25 15:24:48.269836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.270113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:06.354 [2024-10-25 15:24:48.270226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3611.032 ms 00:20:06.354 [2024-10-25 15:24:48.270320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.307030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.307312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.354 [2024-10-25 15:24:48.307410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.445 ms 00:20:06.354 [2024-10-25 15:24:48.307453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.307727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.307840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.354 [2024-10-25 15:24:48.307926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:06.354 [2024-10-25 15:24:48.307972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.349534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.349717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.354 [2024-10-25 15:24:48.349870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.486 ms 00:20:06.354 [2024-10-25 15:24:48.349912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.349969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.350103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.354 [2024-10-25 15:24:48.350142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.354 [2024-10-25 15:24:48.350187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.350689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.350808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.354 [2024-10-25 15:24:48.350882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:20:06.354 [2024-10-25 15:24:48.350931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 
[2024-10-25 15:24:48.351172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.351270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.354 [2024-10-25 15:24:48.351336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:06.354 [2024-10-25 15:24:48.351377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.369789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.369829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.354 [2024-10-25 15:24:48.369844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.305 ms 00:20:06.354 [2024-10-25 15:24:48.369860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.381752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:06.354 [2024-10-25 15:24:48.384930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.384959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.354 [2024-10-25 15:24:48.384975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.008 ms 00:20:06.354 [2024-10-25 15:24:48.384986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.482525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.482587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:06.354 [2024-10-25 15:24:48.482607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.661 ms 00:20:06.354 [2024-10-25 15:24:48.482618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.482806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.482820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.354 [2024-10-25 15:24:48.482837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:20:06.354 [2024-10-25 15:24:48.482852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.519091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.519132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:06.354 [2024-10-25 15:24:48.519149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.241 ms 00:20:06.354 [2024-10-25 15:24:48.519161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.554270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.554306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:06.354 [2024-10-25 15:24:48.554324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.106 ms 00:20:06.354 [2024-10-25 15:24:48.554334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.554995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.555016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.354 
[2024-10-25 15:24:48.555030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:20:06.354 [2024-10-25 15:24:48.555041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.354 [2024-10-25 15:24:48.659038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.354 [2024-10-25 15:24:48.659096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:06.355 [2024-10-25 15:24:48.659120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.104 ms 00:20:06.355 [2024-10-25 15:24:48.659132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.697141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.355 [2024-10-25 15:24:48.697209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:06.355 [2024-10-25 15:24:48.697233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.957 ms 00:20:06.355 [2024-10-25 15:24:48.697245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.734060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.355 [2024-10-25 15:24:48.734104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:06.355 [2024-10-25 15:24:48.734121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.816 ms 00:20:06.355 [2024-10-25 15:24:48.734131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.771479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.355 [2024-10-25 15:24:48.771647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.355 [2024-10-25 15:24:48.771675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.360 ms 00:20:06.355 [2024-10-25 15:24:48.771685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.771750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.355 [2024-10-25 15:24:48.771762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.355 [2024-10-25 15:24:48.771780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.355 [2024-10-25 15:24:48.771790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.771897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.355 [2024-10-25 15:24:48.771909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.355 [2024-10-25 15:24:48.771923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:06.355 [2024-10-25 15:24:48.771933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.355 [2024-10-25 15:24:48.772970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4132.030 ms, result 0 00:20:06.355 { 00:20:06.355 "name": "ftl0", 00:20:06.355 "uuid": "52199eca-5cab-4185-b25a-1e7503e93f9a" 00:20:06.355 } 00:20:06.355 15:24:48 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:06.355 15:24:48 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.355 15:24:49 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:06.355 15:24:49 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:06.613 [2024-10-25 15:24:49.211674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.211735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.613 [2024-10-25 15:24:49.211753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.613 [2024-10-25 15:24:49.211778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.211806] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.613 [2024-10-25 15:24:49.216077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.216112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.613 [2024-10-25 15:24:49.216131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.253 ms 00:20:06.613 [2024-10-25 15:24:49.216142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.216403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.216418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.613 [2024-10-25 15:24:49.216431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:20:06.613 [2024-10-25 15:24:49.216445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.218956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.218979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.613 [2024-10-25 15:24:49.218993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:20:06.613 [2024-10-25 15:24:49.219004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.223992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.224028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.613 [2024-10-25 15:24:49.224044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.974 ms 00:20:06.613 [2024-10-25 15:24:49.224055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.260102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.260146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.613 [2024-10-25 15:24:49.260164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.011 ms 00:20:06.613 [2024-10-25 15:24:49.260175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.281829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.281876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.613 [2024-10-25 15:24:49.281893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.634 ms 00:20:06.613 [2024-10-25 15:24:49.281904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.282057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.282071] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.613 [2024-10-25 15:24:49.282085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:06.613 [2024-10-25 15:24:49.282095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.613 [2024-10-25 15:24:49.318324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.613 [2024-10-25 15:24:49.318366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.613 [2024-10-25 15:24:49.318382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.265 ms 00:20:06.613 [2024-10-25 15:24:49.318393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.872 [2024-10-25 15:24:49.354740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.872 [2024-10-25 15:24:49.354797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.872 [2024-10-25 15:24:49.354816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.356 ms 00:20:06.872 [2024-10-25 15:24:49.354827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.872 [2024-10-25 15:24:49.390206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.872 [2024-10-25 15:24:49.390248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.872 [2024-10-25 15:24:49.390265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.231 ms 00:20:06.872 [2024-10-25 15:24:49.390275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.872 [2024-10-25 15:24:49.425272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.872 [2024-10-25 15:24:49.425317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.872 [2024-10-25 15:24:49.425334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.946 ms 00:20:06.872 [2024-10-25 15:24:49.425344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.872 [2024-10-25 15:24:49.425390] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.872 [2024-10-25 15:24:49.425407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.872 [2024-10-25 15:24:49.425423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.872 [2024-10-25 15:24:49.425435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.872 [2024-10-25 15:24:49.425449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425527] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 
[2024-10-25 15:24:49.425841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.425989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:06.873 [2024-10-25 15:24:49.426166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.873 [2024-10-25 15:24:49.426647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.874 [2024-10-25 15:24:49.426659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.874 [2024-10-25 15:24:49.426673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:06.874 [2024-10-25 15:24:49.426691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.874 [2024-10-25 15:24:49.426704] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 52199eca-5cab-4185-b25a-1e7503e93f9a 00:20:06.874 [2024-10-25 15:24:49.426715] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.874 [2024-10-25 15:24:49.426733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.874 [2024-10-25 15:24:49.426744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.874 [2024-10-25 15:24:49.426757] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.874 [2024-10-25 15:24:49.426770] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.874 [2024-10-25 15:24:49.426783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.874 [2024-10-25 15:24:49.426792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.874 [2024-10-25 15:24:49.426804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.874 [2024-10-25 15:24:49.426813] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:06.874 [2024-10-25 15:24:49.426825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-10-25 15:24:49.426836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.874 [2024-10-25 15:24:49.426850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.440 ms 00:20:06.874 [2024-10-25 15:24:49.426860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.446812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-10-25 15:24:49.446852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.874 [2024-10-25 15:24:49.446868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.928 ms 00:20:06.874 [2024-10-25 15:24:49.446878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.447478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.874 [2024-10-25 15:24:49.447492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.874 [2024-10-25 15:24:49.447505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:20:06.874 [2024-10-25 15:24:49.447516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.514200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.874 [2024-10-25 15:24:49.514260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.874 [2024-10-25 15:24:49.514278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.874 [2024-10-25 15:24:49.514290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.514368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.874 [2024-10-25 15:24:49.514380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.874 [2024-10-25 15:24:49.514394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.874 [2024-10-25 15:24:49.514406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.514516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.874 [2024-10-25 15:24:49.514530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.874 [2024-10-25 15:24:49.514544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.874 [2024-10-25 15:24:49.514554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.874 [2024-10-25 15:24:49.514579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.874 [2024-10-25 15:24:49.514590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.874 [2024-10-25 15:24:49.514602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.874 [2024-10-25 15:24:49.514613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.132 [2024-10-25 15:24:49.639709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.132 [2024-10-25 15:24:49.639774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.132 [2024-10-25 15:24:49.639792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:07.132 [2024-10-25 15:24:49.639803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.133 [2024-10-25 15:24:49.742196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.133 [2024-10-25 15:24:49.742356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.133 [2024-10-25 15:24:49.742455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.133 [2024-10-25 15:24:49.742626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.133 [2024-10-25 15:24:49.742702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.133 [2024-10-25 15:24:49.742778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.742837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.133 [2024-10-25 15:24:49.742849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.133 [2024-10-25 15:24:49.742862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.133 [2024-10-25 15:24:49.742872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.133 [2024-10-25 15:24:49.743017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.173 ms, result 0 00:20:07.133 true 00:20:07.133 15:24:49 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76200 
00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76200 ']' 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76200 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76200 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.133 killing process with pid 76200 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76200' 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76200 00:20:07.133 15:24:49 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76200 00:20:12.405 15:24:54 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:16.599 262144+0 records in 00:20:16.599 262144+0 records out 00:20:16.599 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.39349 s, 244 MB/s 00:20:16.599 15:24:58 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:17.978 15:25:00 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:18.238 [2024-10-25 15:25:00.728002] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:20:18.238 [2024-10-25 15:25:00.728135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76447 ] 00:20:18.238 [2024-10-25 15:25:00.918570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.497 [2024-10-25 15:25:01.033899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.757 [2024-10-25 15:25:01.398471] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.757 [2024-10-25 15:25:01.398532] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.018 [2024-10-25 15:25:01.567817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.567864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.018 [2024-10-25 15:25:01.567885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.018 [2024-10-25 15:25:01.567896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.567948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.567960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.018 [2024-10-25 15:25:01.567977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:19.018 [2024-10-25 15:25:01.567987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.568008] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:20:19.018 [2024-10-25 15:25:01.568974] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.018 [2024-10-25 15:25:01.569007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.569019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.018 [2024-10-25 15:25:01.569029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:20:19.018 [2024-10-25 15:25:01.569039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.570552] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.018 [2024-10-25 15:25:01.589545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.589581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.018 [2024-10-25 15:25:01.589595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.024 ms 00:20:19.018 [2024-10-25 15:25:01.589605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.589676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.589695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.018 [2024-10-25 15:25:01.589706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:19.018 [2024-10-25 15:25:01.589716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.596434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.596459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.018 [2024-10-25 15:25:01.596471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.654 ms 00:20:19.018 [2024-10-25 15:25:01.596480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.596590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.596604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.018 [2024-10-25 15:25:01.596615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:19.018 [2024-10-25 15:25:01.596625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.596663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.596674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.018 [2024-10-25 15:25:01.596685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.018 [2024-10-25 15:25:01.596695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.596721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.018 [2024-10-25 15:25:01.601526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.601554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.018 [2024-10-25 15:25:01.601565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.819 ms 00:20:19.018 [2024-10-25 15:25:01.601582] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.601611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.601623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.018 [2024-10-25 15:25:01.601633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.018 [2024-10-25 15:25:01.601642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.601692] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.018 [2024-10-25 15:25:01.601717] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.018 [2024-10-25 15:25:01.601751] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.018 [2024-10-25 15:25:01.601774] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.018 [2024-10-25 15:25:01.601861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.018 [2024-10-25 15:25:01.601874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.018 [2024-10-25 15:25:01.601887] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.018 [2024-10-25 15:25:01.601900] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.018 [2024-10-25 15:25:01.601912] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.018 [2024-10-25 15:25:01.601922] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.018 [2024-10-25 15:25:01.601932] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.018 [2024-10-25 15:25:01.601941] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.018 [2024-10-25 15:25:01.601951] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.018 [2024-10-25 15:25:01.601967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.601977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.018 [2024-10-25 15:25:01.601987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:19.018 [2024-10-25 15:25:01.601996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.602065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.018 [2024-10-25 15:25:01.602076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.018 [2024-10-25 15:25:01.602085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:19.018 [2024-10-25 15:25:01.602095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.018 [2024-10-25 15:25:01.602201] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.018 [2024-10-25 15:25:01.602222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.018 [2024-10-25 15:25:01.602233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:19.018 [2024-10-25 15:25:01.602243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.018 [2024-10-25 15:25:01.602253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.018 [2024-10-25 15:25:01.602262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.018 [2024-10-25 15:25:01.602271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.018 [2024-10-25 15:25:01.602281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.018 [2024-10-25 15:25:01.602291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.018 [2024-10-25 15:25:01.602300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.018 [2024-10-25 15:25:01.602310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.018 [2024-10-25 15:25:01.602320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.018 [2024-10-25 15:25:01.602328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.018 [2024-10-25 15:25:01.602338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.018 [2024-10-25 15:25:01.602347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.019 [2024-10-25 15:25:01.602368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.019 [2024-10-25 15:25:01.602387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.019 [2024-10-25 15:25:01.602414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.019 [2024-10-25 15:25:01.602441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.019 [2024-10-25 15:25:01.602467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.019 [2024-10-25 15:25:01.602494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.019 [2024-10-25 15:25:01.602521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.019 [2024-10-25 15:25:01.602539] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:19.019 [2024-10-25 15:25:01.602547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.019 [2024-10-25 15:25:01.602556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.019 [2024-10-25 15:25:01.602565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.019 [2024-10-25 15:25:01.602574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.019 [2024-10-25 15:25:01.602582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.019 [2024-10-25 15:25:01.602600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.019 [2024-10-25 15:25:01.602610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.019 [2024-10-25 15:25:01.602629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.019 [2024-10-25 15:25:01.602638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.019 [2024-10-25 15:25:01.602657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.019 [2024-10-25 15:25:01.602666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.019 [2024-10-25 15:25:01.602675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.019 [2024-10-25 15:25:01.602684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.019 [2024-10-25 15:25:01.602693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.019 [2024-10-25 15:25:01.602702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.019 [2024-10-25 15:25:01.602712] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.019 [2024-10-25 15:25:01.602724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.019 [2024-10-25 15:25:01.602745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.019 [2024-10-25 15:25:01.602755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.019 [2024-10-25 15:25:01.602765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.019 [2024-10-25 15:25:01.602776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.019 [2024-10-25 15:25:01.602786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.019 [2024-10-25 15:25:01.602796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.019 [2024-10-25 15:25:01.602806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.019 [2024-10-25 15:25:01.602816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.019 [2024-10-25 15:25:01.602826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.019 [2024-10-25 15:25:01.602876] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.019 [2024-10-25 15:25:01.602886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602904] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.019 [2024-10-25 15:25:01.602914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.019 [2024-10-25 15:25:01.602932] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.019 [2024-10-25 15:25:01.602959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.019 [2024-10-25 15:25:01.602970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.602980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.019 [2024-10-25 15:25:01.602990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:20:19.019 [2024-10-25 15:25:01.603000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.643039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.643075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.019 [2024-10-25 15:25:01.643088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.054 ms 00:20:19.019 [2024-10-25 15:25:01.643099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.643192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.643213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.019 [2024-10-25 15:25:01.643224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.062 ms 00:20:19.019 [2024-10-25 15:25:01.643233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.705803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.705852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.019 [2024-10-25 15:25:01.705867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.606 ms 00:20:19.019 [2024-10-25 15:25:01.705878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.705928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.705939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.019 [2024-10-25 15:25:01.705950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.019 [2024-10-25 15:25:01.705968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.706487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.706506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.019 [2024-10-25 15:25:01.706518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:20:19.019 [2024-10-25 15:25:01.706528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.706653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.706667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.019 [2024-10-25 15:25:01.706678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:19.019 [2024-10-25 15:25:01.706688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.019 [2024-10-25 15:25:01.727310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.019 [2024-10-25 15:25:01.727345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.019 [2024-10-25 15:25:01.727360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.625 ms 00:20:19.019 [2024-10-25 15:25:01.727376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.746796] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:19.279 [2024-10-25 15:25:01.746834] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.279 [2024-10-25 15:25:01.746865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.746876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.279 [2024-10-25 15:25:01.746887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.401 ms 00:20:19.279 [2024-10-25 15:25:01.746897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.776636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.776673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.279 [2024-10-25 15:25:01.776692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.739 ms 00:20:19.279 [2024-10-25 15:25:01.776702] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.794541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.794584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.279 [2024-10-25 15:25:01.794597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.822 ms 00:20:19.279 [2024-10-25 15:25:01.794607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.813617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.813660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.279 [2024-10-25 15:25:01.813674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.996 ms 00:20:19.279 [2024-10-25 15:25:01.813684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.814540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.814563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.279 [2024-10-25 15:25:01.814575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:20:19.279 [2024-10-25 15:25:01.814586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.901503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.901564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.279 [2024-10-25 15:25:01.901582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.035 ms 00:20:19.279 [2024-10-25 15:25:01.901593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.913022] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.279 [2024-10-25 15:25:01.916250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.916279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.279 [2024-10-25 15:25:01.916293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.592 ms 00:20:19.279 [2024-10-25 15:25:01.916302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.916420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.916434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.279 [2024-10-25 15:25:01.916445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:19.279 [2024-10-25 15:25:01.916455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.916542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.916563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.279 [2024-10-25 15:25:01.916574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:19.279 [2024-10-25 15:25:01.916584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.916609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.916620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:19.279 [2024-10-25 15:25:01.916630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:19.279 [2024-10-25 15:25:01.916640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.279 [2024-10-25 15:25:01.916698] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.279 [2024-10-25 15:25:01.916717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.279 [2024-10-25 15:25:01.916728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.280 [2024-10-25 15:25:01.916745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:19.280 [2024-10-25 15:25:01.916755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.280 [2024-10-25 15:25:01.953340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.280 [2024-10-25 15:25:01.953376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.280 [2024-10-25 15:25:01.953390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.620 ms 00:20:19.280 [2024-10-25 15:25:01.953401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.280 [2024-10-25 15:25:01.953487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.280 [2024-10-25 15:25:01.953500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.280 [2024-10-25 15:25:01.953511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.280 [2024-10-25 15:25:01.953520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.280 [2024-10-25 15:25:01.954636] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.964 ms, result 0 00:20:20.659  [2024-10-25T15:25:04.324Z] Copying: 26/1024 [MB] (26 MBps) [2024-10-25T15:25:05.261Z] Copying: 53/1024 [MB] (26 MBps) [2024-10-25T15:25:06.244Z] Copying: 77/1024 [MB] (23 MBps) [2024-10-25T15:25:07.181Z] Copying: 107/1024 [MB] (29 MBps) [2024-10-25T15:25:08.118Z] Copying: 134/1024 [MB] (27 MBps) [2024-10-25T15:25:09.054Z] Copying: 160/1024 [MB] (26 MBps) [2024-10-25T15:25:10.001Z] Copying: 188/1024 [MB] (28 MBps) [2024-10-25T15:25:11.397Z] Copying: 215/1024 [MB] (26 MBps) [2024-10-25T15:25:11.960Z] Copying: 242/1024 [MB] (27 MBps) [2024-10-25T15:25:13.327Z] Copying: 270/1024 [MB] (27 MBps) [2024-10-25T15:25:14.258Z] Copying: 296/1024 [MB] (26 MBps) [2024-10-25T15:25:15.190Z] Copying: 324/1024 [MB] (27 MBps) [2024-10-25T15:25:16.124Z] Copying: 351/1024 [MB] (27 MBps) [2024-10-25T15:25:17.058Z] Copying: 378/1024 [MB] (26 MBps) [2024-10-25T15:25:17.992Z] Copying: 404/1024 [MB] (26 MBps) [2024-10-25T15:25:18.959Z] Copying: 430/1024 [MB] (26 MBps) [2024-10-25T15:25:20.334Z] Copying: 457/1024 [MB] (26 MBps) [2024-10-25T15:25:21.271Z] Copying: 483/1024 [MB] (26 MBps) [2024-10-25T15:25:22.208Z] Copying: 509/1024 [MB] (26 MBps) [2024-10-25T15:25:23.144Z] Copying: 536/1024 [MB] (26 MBps) [2024-10-25T15:25:24.081Z] Copying: 562/1024 [MB] (26 MBps) [2024-10-25T15:25:25.019Z] Copying: 589/1024 [MB] (26 MBps) [2024-10-25T15:25:25.953Z] Copying: 615/1024 [MB] (26 MBps) [2024-10-25T15:25:27.327Z] Copying: 640/1024 [MB] (25 MBps) [2024-10-25T15:25:28.261Z] Copying: 666/1024 [MB] (25 MBps) [2024-10-25T15:25:29.252Z] Copying: 693/1024 [MB] (27 MBps) [2024-10-25T15:25:30.190Z] Copying: 720/1024 [MB] (26 
MBps) [2024-10-25T15:25:31.159Z] Copying: 748/1024 [MB] (28 MBps) [2024-10-25T15:25:32.098Z] Copying: 776/1024 [MB] (27 MBps) [2024-10-25T15:25:33.035Z] Copying: 804/1024 [MB] (27 MBps) [2024-10-25T15:25:34.019Z] Copying: 829/1024 [MB] (25 MBps) [2024-10-25T15:25:34.954Z] Copying: 856/1024 [MB] (27 MBps) [2024-10-25T15:25:36.332Z] Copying: 883/1024 [MB] (26 MBps) [2024-10-25T15:25:37.268Z] Copying: 911/1024 [MB] (27 MBps) [2024-10-25T15:25:38.204Z] Copying: 939/1024 [MB] (27 MBps) [2024-10-25T15:25:39.139Z] Copying: 966/1024 [MB] (27 MBps) [2024-10-25T15:25:40.077Z] Copying: 994/1024 [MB] (27 MBps) [2024-10-25T15:25:40.077Z] Copying: 1021/1024 [MB] (27 MBps) [2024-10-25T15:25:40.077Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-25 15:25:39.987879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:39.987925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:57.349 [2024-10-25 15:25:39.987942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:57.349 [2024-10-25 15:25:39.987952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:39.987972] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.349 [2024-10-25 15:25:39.992244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:39.992279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:57.349 [2024-10-25 15:25:39.992292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:20:57.349 [2024-10-25 15:25:39.992302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:39.994060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:39.994100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:57.349 [2024-10-25 15:25:39.994112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:20:57.349 [2024-10-25 15:25:39.994123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:40.011811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:40.011855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:57.349 [2024-10-25 15:25:40.011868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:20:57.349 [2024-10-25 15:25:40.011878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:40.016856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:40.016899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:57.349 [2024-10-25 15:25:40.016911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:20:57.349 [2024-10-25 15:25:40.016921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:40.054003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:40.054060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:57.349 [2024-10-25 15:25:40.054076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.086 ms 00:20:57.349 [2024-10-25 15:25:40.054087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:40.074957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:40.074997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:57.349 [2024-10-25 15:25:40.075011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.864 ms 00:20:57.349 [2024-10-25 15:25:40.075021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.349 [2024-10-25 15:25:40.075154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.349 [2024-10-25 15:25:40.075168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:57.349 [2024-10-25 15:25:40.075188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:57.349 [2024-10-25 15:25:40.075204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.609 [2024-10-25 15:25:40.111357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.609 [2024-10-25 15:25:40.111398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:57.609 [2024-10-25 15:25:40.111411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.194 ms 00:20:57.609 [2024-10-25 15:25:40.111421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.609 [2024-10-25 15:25:40.146842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.609 [2024-10-25 15:25:40.146879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:57.609 [2024-10-25 15:25:40.146904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.441 ms 00:20:57.609 [2024-10-25 15:25:40.146914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.609 [2024-10-25 15:25:40.182232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.609 [2024-10-25 15:25:40.182272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:57.609 [2024-10-25 15:25:40.182286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.331 ms 00:20:57.609 [2024-10-25 15:25:40.182296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.609 [2024-10-25 15:25:40.217472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.609 [2024-10-25 15:25:40.217514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:57.609 [2024-10-25 15:25:40.217527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.153 ms 00:20:57.609 [2024-10-25 15:25:40.217537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.609 [2024-10-25 15:25:40.217573] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:57.609 [2024-10-25 15:25:40.217589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.217999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:57.609 [2024-10-25 15:25:40.218152] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218424] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:57.610 [2024-10-25 15:25:40.218650] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:57.610 [2024-10-25 15:25:40.218666] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 52199eca-5cab-4185-b25a-1e7503e93f9a 00:20:57.610 [2024-10-25 15:25:40.218677] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:57.610 [2024-10-25 15:25:40.218690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:57.610 [2024-10-25 15:25:40.218700] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 
0 00:20:57.610 [2024-10-25 15:25:40.218710] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:57.610 [2024-10-25 15:25:40.218720] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:57.610 [2024-10-25 15:25:40.218730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:57.610 [2024-10-25 15:25:40.218740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:57.610 [2024-10-25 15:25:40.218760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:57.610 [2024-10-25 15:25:40.218769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:57.610 [2024-10-25 15:25:40.218778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.610 [2024-10-25 15:25:40.218789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:57.610 [2024-10-25 15:25:40.218799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:20:57.610 [2024-10-25 15:25:40.218809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.238007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.610 [2024-10-25 15:25:40.238046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:57.610 [2024-10-25 15:25:40.238058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.196 ms 00:20:57.610 [2024-10-25 15:25:40.238069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.238657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.610 [2024-10-25 15:25:40.238679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:57.610 [2024-10-25 15:25:40.238690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:20:57.610 [2024-10-25 15:25:40.238700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.288930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.610 [2024-10-25 15:25:40.288969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.610 [2024-10-25 15:25:40.288982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.610 [2024-10-25 15:25:40.288992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.289046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.610 [2024-10-25 15:25:40.289058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.610 [2024-10-25 15:25:40.289068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.610 [2024-10-25 15:25:40.289078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.289165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.610 [2024-10-25 15:25:40.289190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.610 [2024-10-25 15:25:40.289201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.610 [2024-10-25 15:25:40.289211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.610 [2024-10-25 15:25:40.289228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.610 [2024-10-25 15:25:40.289238] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.610 [2024-10-25 15:25:40.289248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.610 [2024-10-25 15:25:40.289257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.412089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.412151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.868 [2024-10-25 15:25:40.412166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.412183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.868 [2024-10-25 15:25:40.511264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.511275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.868 [2024-10-25 15:25:40.511390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.511400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.868 [2024-10-25 15:25:40.511460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.511469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.868 [2024-10-25 15:25:40.511742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.511751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:57.868 [2024-10-25 15:25:40.511809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.868 [2024-10-25 15:25:40.511819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.868 [2024-10-25 15:25:40.511855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:57.868 [2024-10-25 15:25:40.511866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.869 [2024-10-25 15:25:40.511880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.869 [2024-10-25 15:25:40.511889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.869 [2024-10-25 15:25:40.511930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
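For scale on this restore test: the earlier dd wrote 256K blocks of 4 KiB of urandom into testfile (262144 × 4096 B = 1073741824 B, exactly 1 GiB), and the spdk_dd invocation below reads the same --count=262144 blocks back out of ftl0 so the test can verify the restored data against the md5 recorded earlier. The dd throughput report is self-consistent; checking with awk (decimal MB, as dd reports):

awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 4.39349 / 1e6 }'   # prints 244 MB/s, matching dd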
00:20:57.869 [2024-10-25 15:25:40.511942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.869 [2024-10-25 15:25:40.511953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:57.869 [2024-10-25 15:25:40.511963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.869 [2024-10-25 15:25:40.512077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.017 ms, result 0 00:20:59.260 00:20:59.260 00:20:59.260 15:25:41 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:59.260 [2024-10-25 15:25:41.725645] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:20:59.260 [2024-10-25 15:25:41.725760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76859 ] 00:20:59.260 [2024-10-25 15:25:41.906273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.528 [2024-10-25 15:25:42.013392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.802 [2024-10-25 15:25:42.371738] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:59.802 [2024-10-25 15:25:42.371802] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.063 [2024-10-25 15:25:42.532030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.532087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.063 [2024-10-25 15:25:42.532106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:00.063 [2024-10-25 15:25:42.532117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.532164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.532189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.063 [2024-10-25 15:25:42.532204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:00.063 [2024-10-25 15:25:42.532214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.532236] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.063 [2024-10-25 15:25:42.533122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.063 [2024-10-25 15:25:42.533153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.533164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.063 [2024-10-25 15:25:42.533191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:21:00.063 [2024-10-25 15:25:42.533202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.534668] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.063 [2024-10-25 15:25:42.554060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.554116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.063 [2024-10-25 15:25:42.554131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.423 ms 00:21:00.063 [2024-10-25 15:25:42.554142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.554213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.554229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.063 [2024-10-25 15:25:42.554240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:00.063 [2024-10-25 15:25:42.554251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.560984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.561016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.063 [2024-10-25 15:25:42.561028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.674 ms 00:21:00.063 [2024-10-25 15:25:42.561039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.561119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.561133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.063 [2024-10-25 15:25:42.561144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:00.063 [2024-10-25 15:25:42.561154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.561201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.561214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.063 [2024-10-25 15:25:42.561225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:00.063 [2024-10-25 15:25:42.561235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.561258] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.063 [2024-10-25 15:25:42.565953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.565986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.063 [2024-10-25 15:25:42.565998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:21:00.063 [2024-10-25 15:25:42.566011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.566040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.566051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.063 [2024-10-25 15:25:42.566062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:00.063 [2024-10-25 15:25:42.566072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.566124] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.063 [2024-10-25 15:25:42.566147] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:00.063 [2024-10-25 15:25:42.566193] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.063 [2024-10-25 15:25:42.566215] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:00.063 [2024-10-25 15:25:42.566304] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.063 [2024-10-25 15:25:42.566318] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.063 [2024-10-25 15:25:42.566330] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.063 [2024-10-25 15:25:42.566343] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.063 [2024-10-25 15:25:42.566355] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.063 [2024-10-25 15:25:42.566366] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.063 [2024-10-25 15:25:42.566376] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.063 [2024-10-25 15:25:42.566386] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.063 [2024-10-25 15:25:42.566396] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.063 [2024-10-25 15:25:42.566410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.566419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.063 [2024-10-25 15:25:42.566430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:00.063 [2024-10-25 15:25:42.566440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.566511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.063 [2024-10-25 15:25:42.566522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.063 [2024-10-25 15:25:42.566533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:00.063 [2024-10-25 15:25:42.566542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.063 [2024-10-25 15:25:42.566636] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.063 [2024-10-25 15:25:42.566658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.063 [2024-10-25 15:25:42.566669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.063 [2024-10-25 15:25:42.566680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.063 [2024-10-25 15:25:42.566690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.063 [2024-10-25 15:25:42.566699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.063 [2024-10-25 15:25:42.566709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.063 [2024-10-25 15:25:42.566718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.063 [2024-10-25 15:25:42.566728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.063 [2024-10-25 15:25:42.566737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.063 [2024-10-25 15:25:42.566747] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.064 [2024-10-25 15:25:42.566757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.064 [2024-10-25 15:25:42.566766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.064 [2024-10-25 15:25:42.566775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.064 [2024-10-25 15:25:42.566784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.064 [2024-10-25 15:25:42.566802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.064 [2024-10-25 15:25:42.566821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.064 [2024-10-25 15:25:42.566830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.064 [2024-10-25 15:25:42.566849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.064 [2024-10-25 15:25:42.566867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.064 [2024-10-25 15:25:42.566877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.064 [2024-10-25 15:25:42.566894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.064 [2024-10-25 15:25:42.566903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.064 [2024-10-25 15:25:42.566921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.064 [2024-10-25 15:25:42.566941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.064 [2024-10-25 15:25:42.566960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.064 [2024-10-25 15:25:42.566969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.064 [2024-10-25 15:25:42.566978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.064 [2024-10-25 15:25:42.566987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.064 [2024-10-25 15:25:42.566996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.064 [2024-10-25 15:25:42.567005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.064 [2024-10-25 15:25:42.567014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.064 [2024-10-25 15:25:42.567024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.064 [2024-10-25 15:25:42.567033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.064 [2024-10-25 15:25:42.567042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.064 [2024-10-25 15:25:42.567052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 
00:21:00.064 [2024-10-25 15:25:42.567063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.064 [2024-10-25 15:25:42.567072] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.064 [2024-10-25 15:25:42.567082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.064 [2024-10-25 15:25:42.567092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.064 [2024-10-25 15:25:42.567101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.064 [2024-10-25 15:25:42.567111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.064 [2024-10-25 15:25:42.567120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.064 [2024-10-25 15:25:42.567131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.064 [2024-10-25 15:25:42.567140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.064 [2024-10-25 15:25:42.567149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.064 [2024-10-25 15:25:42.567158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.064 [2024-10-25 15:25:42.567169] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.064 [2024-10-25 15:25:42.567191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.064 [2024-10-25 15:25:42.567213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.064 [2024-10-25 15:25:42.567223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.064 [2024-10-25 15:25:42.567234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.064 [2024-10-25 15:25:42.567244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.064 [2024-10-25 15:25:42.567255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.064 [2024-10-25 15:25:42.567265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.064 [2024-10-25 15:25:42.567275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.064 [2024-10-25 15:25:42.567285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.064 [2024-10-25 15:25:42.567294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567314] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.064 [2024-10-25 15:25:42.567344] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.064 [2024-10-25 15:25:42.567355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.064 [2024-10-25 15:25:42.567380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.064 [2024-10-25 15:25:42.567390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.064 [2024-10-25 15:25:42.567401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.064 [2024-10-25 15:25:42.567412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.567422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.064 [2024-10-25 15:25:42.567432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:21:00.064 [2024-10-25 15:25:42.567442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.607329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.607369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.064 [2024-10-25 15:25:42.607383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.906 ms 00:21:00.064 [2024-10-25 15:25:42.607394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.607470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.607486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:00.064 [2024-10-25 15:25:42.607497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:00.064 [2024-10-25 15:25:42.607507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.665082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.665124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.064 [2024-10-25 15:25:42.665138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.613 ms 00:21:00.064 [2024-10-25 15:25:42.665149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.665192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.665203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.064 
[2024-10-25 15:25:42.665214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:00.064 [2024-10-25 15:25:42.665229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.665707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.665729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.064 [2024-10-25 15:25:42.665741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:21:00.064 [2024-10-25 15:25:42.665751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.665866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.665879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.064 [2024-10-25 15:25:42.665891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:00.064 [2024-10-25 15:25:42.665901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.683738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.683776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.064 [2024-10-25 15:25:42.683790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.841 ms 00:21:00.064 [2024-10-25 15:25:42.683803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.701907] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:00.064 [2024-10-25 15:25:42.701943] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:00.064 [2024-10-25 15:25:42.701959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.701970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:00.064 [2024-10-25 15:25:42.701983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.080 ms 00:21:00.064 [2024-10-25 15:25:42.701993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.064 [2024-10-25 15:25:42.731899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.064 [2024-10-25 15:25:42.731951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:00.064 [2024-10-25 15:25:42.731965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.910 ms 00:21:00.064 [2024-10-25 15:25:42.731976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.065 [2024-10-25 15:25:42.750427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.065 [2024-10-25 15:25:42.750467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:00.065 [2024-10-25 15:25:42.750481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.435 ms 00:21:00.065 [2024-10-25 15:25:42.750490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.065 [2024-10-25 15:25:42.768277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.065 [2024-10-25 15:25:42.768317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:00.065 [2024-10-25 15:25:42.768331] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 17.779 ms 00:21:00.065 [2024-10-25 15:25:42.768341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.065 [2024-10-25 15:25:42.769059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.065 [2024-10-25 15:25:42.769092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:00.065 [2024-10-25 15:25:42.769105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:21:00.065 [2024-10-25 15:25:42.769115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.853933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.853998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:00.323 [2024-10-25 15:25:42.854015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.931 ms 00:21:00.323 [2024-10-25 15:25:42.854037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.864809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:00.323 [2024-10-25 15:25:42.867259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.867294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:00.323 [2024-10-25 15:25:42.867307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.195 ms 00:21:00.323 [2024-10-25 15:25:42.867318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.867403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.867417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:00.323 [2024-10-25 15:25:42.867428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.323 [2024-10-25 15:25:42.867438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.867513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.867525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:00.323 [2024-10-25 15:25:42.867536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:00.323 [2024-10-25 15:25:42.867546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.867567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.867577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:00.323 [2024-10-25 15:25:42.867587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:00.323 [2024-10-25 15:25:42.867597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.867638] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:00.323 [2024-10-25 15:25:42.867657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.867667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:00.323 [2024-10-25 15:25:42.867677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:00.323 [2024-10-25 15:25:42.867687] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.903687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.903731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:00.323 [2024-10-25 15:25:42.903745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.039 ms 00:21:00.323 [2024-10-25 15:25:42.903755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.903843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.323 [2024-10-25 15:25:42.903856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:00.323 [2024-10-25 15:25:42.903867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:00.323 [2024-10-25 15:25:42.903877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.323 [2024-10-25 15:25:42.905060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.133 ms, result 0 00:21:01.701  [2024-10-25T15:25:45.378Z] Copying: 27/1024 [MB] (27 MBps) [2024-10-25T15:25:46.315Z] Copying: 55/1024 [MB] (28 MBps) [2024-10-25T15:25:47.252Z] Copying: 83/1024 [MB] (27 MBps) [2024-10-25T15:25:48.299Z] Copying: 113/1024 [MB] (29 MBps) [2024-10-25T15:25:49.235Z] Copying: 143/1024 [MB] (30 MBps) [2024-10-25T15:25:50.171Z] Copying: 174/1024 [MB] (31 MBps) [2024-10-25T15:25:51.542Z] Copying: 208/1024 [MB] (33 MBps) [2024-10-25T15:25:52.476Z] Copying: 237/1024 [MB] (28 MBps) [2024-10-25T15:25:53.418Z] Copying: 265/1024 [MB] (28 MBps) [2024-10-25T15:25:54.357Z] Copying: 293/1024 [MB] (27 MBps) [2024-10-25T15:25:55.294Z] Copying: 320/1024 [MB] (27 MBps) [2024-10-25T15:25:56.232Z] Copying: 349/1024 [MB] (29 MBps) [2024-10-25T15:25:57.216Z] Copying: 377/1024 [MB] (27 MBps) [2024-10-25T15:25:58.153Z] Copying: 405/1024 [MB] (27 MBps) [2024-10-25T15:25:59.531Z] Copying: 433/1024 [MB] (28 MBps) [2024-10-25T15:26:00.100Z] Copying: 462/1024 [MB] (28 MBps) [2024-10-25T15:26:01.477Z] Copying: 489/1024 [MB] (27 MBps) [2024-10-25T15:26:02.413Z] Copying: 518/1024 [MB] (28 MBps) [2024-10-25T15:26:03.349Z] Copying: 545/1024 [MB] (27 MBps) [2024-10-25T15:26:04.285Z] Copying: 573/1024 [MB] (27 MBps) [2024-10-25T15:26:05.221Z] Copying: 602/1024 [MB] (28 MBps) [2024-10-25T15:26:06.158Z] Copying: 631/1024 [MB] (28 MBps) [2024-10-25T15:26:07.127Z] Copying: 660/1024 [MB] (28 MBps) [2024-10-25T15:26:08.502Z] Copying: 688/1024 [MB] (27 MBps) [2024-10-25T15:26:09.438Z] Copying: 716/1024 [MB] (28 MBps) [2024-10-25T15:26:10.374Z] Copying: 745/1024 [MB] (29 MBps) [2024-10-25T15:26:11.308Z] Copying: 773/1024 [MB] (28 MBps) [2024-10-25T15:26:12.246Z] Copying: 802/1024 [MB] (28 MBps) [2024-10-25T15:26:13.183Z] Copying: 831/1024 [MB] (29 MBps) [2024-10-25T15:26:14.119Z] Copying: 859/1024 [MB] (28 MBps) [2024-10-25T15:26:15.499Z] Copying: 887/1024 [MB] (27 MBps) [2024-10-25T15:26:16.094Z] Copying: 914/1024 [MB] (27 MBps) [2024-10-25T15:26:17.473Z] Copying: 941/1024 [MB] (26 MBps) [2024-10-25T15:26:18.411Z] Copying: 968/1024 [MB] (27 MBps) [2024-10-25T15:26:19.352Z] Copying: 995/1024 [MB] (26 MBps) [2024-10-25T15:26:19.352Z] Copying: 1022/1024 [MB] (27 MBps) [2024-10-25T15:26:20.291Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-25 15:26:20.015782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.015915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:21:37.563 [2024-10-25 15:26:20.015984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:37.563 [2024-10-25 15:26:20.016030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.016103] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:37.563 [2024-10-25 15:26:20.031251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.031323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:37.563 [2024-10-25 15:26:20.031352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.114 ms 00:21:37.563 [2024-10-25 15:26:20.031384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.031845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.031896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:37.563 [2024-10-25 15:26:20.031920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:21:37.563 [2024-10-25 15:26:20.031941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.037823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.037881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:37.563 [2024-10-25 15:26:20.037905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.859 ms 00:21:37.563 [2024-10-25 15:26:20.037927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.046387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.046443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:37.563 [2024-10-25 15:26:20.046462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.422 ms 00:21:37.563 [2024-10-25 15:26:20.046476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.083877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.083921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.563 [2024-10-25 15:26:20.083936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.371 ms 00:21:37.563 [2024-10-25 15:26:20.083946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.104908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.104948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.563 [2024-10-25 15:26:20.104962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.955 ms 00:21:37.563 [2024-10-25 15:26:20.104972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.105129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.105146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.563 [2024-10-25 15:26:20.105164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:21:37.563 [2024-10-25 15:26:20.105173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:37.563 [2024-10-25 15:26:20.141457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.141500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:37.563 [2024-10-25 15:26:20.141514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.314 ms 00:21:37.563 [2024-10-25 15:26:20.141524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.177444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.177498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:37.563 [2024-10-25 15:26:20.177527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.937 ms 00:21:37.563 [2024-10-25 15:26:20.177537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.563 [2024-10-25 15:26:20.213106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.563 [2024-10-25 15:26:20.213145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.564 [2024-10-25 15:26:20.213159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.587 ms 00:21:37.564 [2024-10-25 15:26:20.213168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.564 [2024-10-25 15:26:20.248335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.564 [2024-10-25 15:26:20.248373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.564 [2024-10-25 15:26:20.248403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.138 ms 00:21:37.564 [2024-10-25 15:26:20.248413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.564 [2024-10-25 15:26:20.248450] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.564 [2024-10-25 15:26:20.248467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248596] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 
15:26:20.248854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.248991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:21:37.564 [2024-10-25 15:26:20.249116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.564 [2024-10-25 15:26:20.249316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:37.565 [2024-10-25 15:26:20.249542] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:37.565 [2024-10-25 15:26:20.249552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 52199eca-5cab-4185-b25a-1e7503e93f9a 00:21:37.565 [2024-10-25 15:26:20.249567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:37.565 [2024-10-25 15:26:20.249577] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:37.565 [2024-10-25 15:26:20.249596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:37.565 [2024-10-25 15:26:20.249606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:37.565 [2024-10-25 15:26:20.249616] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:37.565 [2024-10-25 15:26:20.249626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:37.565 [2024-10-25 15:26:20.249645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:37.565 [2024-10-25 15:26:20.249654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:37.565 [2024-10-25 15:26:20.249663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:37.565 [2024-10-25 15:26:20.249673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.565 [2024-10-25 15:26:20.249684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:37.565 [2024-10-25 15:26:20.249694] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:21:37.565 [2024-10-25 15:26:20.249704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.565 [2024-10-25 15:26:20.269237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.565 [2024-10-25 15:26:20.269273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:37.565 [2024-10-25 15:26:20.269302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.528 ms 00:21:37.565 [2024-10-25 15:26:20.269312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.565 [2024-10-25 15:26:20.269870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.565 [2024-10-25 15:26:20.269889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:37.565 [2024-10-25 15:26:20.269901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:21:37.565 [2024-10-25 15:26:20.269911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.838 [2024-10-25 15:26:20.321016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.838 [2024-10-25 15:26:20.321057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.838 [2024-10-25 15:26:20.321087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.838 [2024-10-25 15:26:20.321104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.838 [2024-10-25 15:26:20.321173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.838 [2024-10-25 15:26:20.321197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.838 [2024-10-25 15:26:20.321208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.838 [2024-10-25 15:26:20.321218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.838 [2024-10-25 15:26:20.321290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.838 [2024-10-25 15:26:20.321303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.838 [2024-10-25 15:26:20.321314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.838 [2024-10-25 15:26:20.321324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.838 [2024-10-25 15:26:20.321341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.838 [2024-10-25 15:26:20.321352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.838 [2024-10-25 15:26:20.321362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.838 [2024-10-25 15:26:20.321371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.445679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.445742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.839 [2024-10-25 15:26:20.445758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.445784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.545856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.545933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:21:37.839 [2024-10-25 15:26:20.545950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.545960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.839 [2024-10-25 15:26:20.546072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.839 [2024-10-25 15:26:20.546152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.839 [2024-10-25 15:26:20.546315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:37.839 [2024-10-25 15:26:20.546425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.839 [2024-10-25 15:26:20.546514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.839 [2024-10-25 15:26:20.546577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.839 [2024-10-25 15:26:20.546587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.839 [2024-10-25 15:26:20.546597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.839 [2024-10-25 15:26:20.546725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 531.851 ms, result 0 00:21:39.233 00:21:39.233 00:21:39.233 15:26:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:40.615 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:40.615 15:26:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:40.873 [2024-10-25 15:26:23.382282] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:21:40.873 [2024-10-25 15:26:23.382580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77295 ] 00:21:40.873 [2024-10-25 15:26:23.563896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.132 [2024-10-25 15:26:23.675463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.392 [2024-10-25 15:26:24.031361] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.392 [2024-10-25 15:26:24.031435] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:41.652 [2024-10-25 15:26:24.192066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.192124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:41.652 [2024-10-25 15:26:24.192142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:41.652 [2024-10-25 15:26:24.192153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.192214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.192227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.652 [2024-10-25 15:26:24.192242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:41.652 [2024-10-25 15:26:24.192251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.192272] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:41.652 [2024-10-25 15:26:24.193185] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:41.652 [2024-10-25 15:26:24.193221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.193232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.652 [2024-10-25 15:26:24.193244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:21:41.652 [2024-10-25 15:26:24.193253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.194699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:41.652 [2024-10-25 15:26:24.213577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.213613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:41.652 [2024-10-25 15:26:24.213628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.909 ms 00:21:41.652 [2024-10-25 15:26:24.213639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.213722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.213743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:41.652 [2024-10-25 15:26:24.213755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:41.652 
[2024-10-25 15:26:24.213765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.220600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.220625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.652 [2024-10-25 15:26:24.220637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.771 ms 00:21:41.652 [2024-10-25 15:26:24.220646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.220729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.220743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.652 [2024-10-25 15:26:24.220753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:41.652 [2024-10-25 15:26:24.220763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.220806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.220818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:41.652 [2024-10-25 15:26:24.220828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:41.652 [2024-10-25 15:26:24.220838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.220862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:41.652 [2024-10-25 15:26:24.225590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.225618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.652 [2024-10-25 15:26:24.225630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.741 ms 00:21:41.652 [2024-10-25 15:26:24.225644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.225675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.225685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:41.652 [2024-10-25 15:26:24.225696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:41.652 [2024-10-25 15:26:24.225706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.225760] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:41.652 [2024-10-25 15:26:24.225801] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:41.652 [2024-10-25 15:26:24.225842] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:41.652 [2024-10-25 15:26:24.225863] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:41.652 [2024-10-25 15:26:24.225963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:41.652 [2024-10-25 15:26:24.225980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:41.652 [2024-10-25 15:26:24.225993] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:21:41.652 [2024-10-25 15:26:24.226007] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226020] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:41.652 [2024-10-25 15:26:24.226042] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:41.652 [2024-10-25 15:26:24.226052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:41.652 [2024-10-25 15:26:24.226062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:41.652 [2024-10-25 15:26:24.226077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.226088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:41.652 [2024-10-25 15:26:24.226098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:21:41.652 [2024-10-25 15:26:24.226108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.226201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.652 [2024-10-25 15:26:24.226213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:41.652 [2024-10-25 15:26:24.226224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:41.652 [2024-10-25 15:26:24.226234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.652 [2024-10-25 15:26:24.226329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:41.652 [2024-10-25 15:26:24.226346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:41.652 [2024-10-25 15:26:24.226357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:41.652 [2024-10-25 15:26:24.226387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:41.652 [2024-10-25 15:26:24.226415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.652 [2024-10-25 15:26:24.226435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:41.652 [2024-10-25 15:26:24.226444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:41.652 [2024-10-25 15:26:24.226454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:41.652 [2024-10-25 15:26:24.226463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:41.652 [2024-10-25 15:26:24.226472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:41.652 [2024-10-25 15:26:24.226490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:21:41.652 [2024-10-25 15:26:24.226508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:41.652 [2024-10-25 15:26:24.226536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:41.652 [2024-10-25 15:26:24.226563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:41.652 [2024-10-25 15:26:24.226590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:41.652 [2024-10-25 15:26:24.226599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.652 [2024-10-25 15:26:24.226607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:41.653 [2024-10-25 15:26:24.226616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:41.653 [2024-10-25 15:26:24.226625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:41.653 [2024-10-25 15:26:24.226634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:41.653 [2024-10-25 15:26:24.226643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:41.653 [2024-10-25 15:26:24.226651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.653 [2024-10-25 15:26:24.226661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:41.653 [2024-10-25 15:26:24.226669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:41.653 [2024-10-25 15:26:24.226678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:41.653 [2024-10-25 15:26:24.226687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:41.653 [2024-10-25 15:26:24.226696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:41.653 [2024-10-25 15:26:24.226704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.653 [2024-10-25 15:26:24.226713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:41.653 [2024-10-25 15:26:24.226722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:41.653 [2024-10-25 15:26:24.226732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.653 [2024-10-25 15:26:24.226741] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:41.653 [2024-10-25 15:26:24.226751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:41.653 [2024-10-25 15:26:24.226761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:41.653 [2024-10-25 15:26:24.226770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:41.653 [2024-10-25 15:26:24.226780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:41.653 [2024-10-25 15:26:24.226789] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:41.653 [2024-10-25 15:26:24.226798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:41.653 [2024-10-25 15:26:24.226807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:41.653 [2024-10-25 15:26:24.226815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:41.653 [2024-10-25 15:26:24.226825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:41.653 [2024-10-25 15:26:24.226835] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:41.653 [2024-10-25 15:26:24.226847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.226858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:41.653 [2024-10-25 15:26:24.226868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:41.653 [2024-10-25 15:26:24.226878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:41.653 [2024-10-25 15:26:24.226888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:41.653 [2024-10-25 15:26:24.226897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:41.653 [2024-10-25 15:26:24.226907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:41.653 [2024-10-25 15:26:24.226917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:41.653 [2024-10-25 15:26:24.226928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:41.653 [2024-10-25 15:26:24.226938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:41.653 [2024-10-25 15:26:24.226958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.226968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.226978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.226988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.226999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:41.653 [2024-10-25 15:26:24.227009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:41.653 [2024-10-25 15:26:24.227020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.227037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:41.653 [2024-10-25 15:26:24.227048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:41.653 [2024-10-25 15:26:24.227059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:41.653 [2024-10-25 15:26:24.227070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:41.653 [2024-10-25 15:26:24.227081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.227091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:41.653 [2024-10-25 15:26:24.227101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:21:41.653 [2024-10-25 15:26:24.227111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.266281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.266314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.653 [2024-10-25 15:26:24.266328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.187 ms 00:21:41.653 [2024-10-25 15:26:24.266338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.266418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.266435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.653 [2024-10-25 15:26:24.266446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:41.653 [2024-10-25 15:26:24.266456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.334647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.334684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.653 [2024-10-25 15:26:24.334698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.241 ms 00:21:41.653 [2024-10-25 15:26:24.334709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.334756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.334767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.653 [2024-10-25 15:26:24.334778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.653 [2024-10-25 15:26:24.334792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.335298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.335314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.653 [2024-10-25 15:26:24.335325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:21:41.653 [2024-10-25 15:26:24.335334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.335460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.335474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.653 [2024-10-25 15:26:24.335484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:21:41.653 [2024-10-25 15:26:24.335494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.354486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.354522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.653 [2024-10-25 15:26:24.354551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.995 ms 00:21:41.653 [2024-10-25 15:26:24.354565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.653 [2024-10-25 15:26:24.374125] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:41.653 [2024-10-25 15:26:24.374161] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:41.653 [2024-10-25 15:26:24.374189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.653 [2024-10-25 15:26:24.374200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:41.653 [2024-10-25 15:26:24.374212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.535 ms 00:21:41.653 [2024-10-25 15:26:24.374222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.403604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.403648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:41.949 [2024-10-25 15:26:24.403662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.384 ms 00:21:41.949 [2024-10-25 15:26:24.403672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.422044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.422078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:41.949 [2024-10-25 15:26:24.422091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.351 ms 00:21:41.949 [2024-10-25 15:26:24.422101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.440070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.440104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:41.949 [2024-10-25 15:26:24.440118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.959 ms 00:21:41.949 [2024-10-25 15:26:24.440127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.440895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.440914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.949 [2024-10-25 15:26:24.440927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:21:41.949 [2024-10-25 15:26:24.440937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.525691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 
15:26:24.525750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:41.949 [2024-10-25 15:26:24.525783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.866 ms 00:21:41.949 [2024-10-25 15:26:24.525800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.536868] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:41.949 [2024-10-25 15:26:24.539589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.539616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:41.949 [2024-10-25 15:26:24.539629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.762 ms 00:21:41.949 [2024-10-25 15:26:24.539640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.539723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.539736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:41.949 [2024-10-25 15:26:24.539747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:41.949 [2024-10-25 15:26:24.539758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.539851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.539869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:41.949 [2024-10-25 15:26:24.539880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:41.949 [2024-10-25 15:26:24.539890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.539912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.949 [2024-10-25 15:26:24.539923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:41.949 [2024-10-25 15:26:24.539934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:41.949 [2024-10-25 15:26:24.539944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.949 [2024-10-25 15:26:24.539982] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:41.949 [2024-10-25 15:26:24.539996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-10-25 15:26:24.540006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:41.950 [2024-10-25 15:26:24.540017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:41.950 [2024-10-25 15:26:24.540027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-10-25 15:26:24.575702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-10-25 15:26:24.575739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:41.950 [2024-10-25 15:26:24.575754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.712 ms 00:21:41.950 [2024-10-25 15:26:24.575765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-10-25 15:26:24.575872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.950 [2024-10-25 15:26:24.575888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:41.950 [2024-10-25 
15:26:24.575900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:21:41.950 [2024-10-25 15:26:24.575910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.950 [2024-10-25 15:26:24.577014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.143 ms, result 0 00:21:42.887  [2024-10-25T15:27:04.789Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-25 15:27:04.703327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.061 [2024-10-25 15:27:04.703391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:22.061 [2024-10-25 15:27:04.703406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:22.061 [2024-10-25 15:27:04.703417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.061 [2024-10-25 15:27:04.705410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:22.061 [2024-10-25 15:27:04.711093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:22.061 [2024-10-25 15:27:04.711273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:22.061 [2024-10-25 15:27:04.711356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.603 ms 00:22:22.061 [2024-10-25 15:27:04.711392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.061 [2024-10-25 15:27:04.722820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.061 [2024-10-25 15:27:04.722977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:22.061 [2024-10-25 15:27:04.723061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.755 ms 00:22:22.061 [2024-10-25 15:27:04.723098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.061 [2024-10-25 15:27:04.745849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.061 [2024-10-25 15:27:04.746004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:22.061 [2024-10-25 15:27:04.746090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:22:22.061 [2024-10-25 15:27:04.746126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.061 [2024-10-25 15:27:04.751065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.061 [2024-10-25 15:27:04.751208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:22.061 [2024-10-25 15:27:04.751324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.884 ms 00:22:22.061 [2024-10-25 15:27:04.751361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.061 [2024-10-25 15:27:04.787594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.061 [2024-10-25 15:27:04.787750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:22.061 [2024-10-25 15:27:04.787830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.202 ms 00:22:22.061 [2024-10-25 15:27:04.787865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.320 [2024-10-25 15:27:04.809743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.320 [2024-10-25 15:27:04.809903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:22.320 [2024-10-25 15:27:04.809991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.808 ms 00:22:22.320 [2024-10-25 15:27:04.810027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.320 [2024-10-25 15:27:04.933323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.320 [2024-10-25 15:27:04.933490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:22.320 [2024-10-25 15:27:04.933563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 123.437 ms 00:22:22.320 [2024-10-25 15:27:04.933598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.320 [2024-10-25 15:27:04.970097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.320 [2024-10-25 15:27:04.970241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:22.320 [2024-10-25 15:27:04.970326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.515 ms 00:22:22.320 [2024-10-25 15:27:04.970362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.320 [2024-10-25 15:27:05.004211] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.320 [2024-10-25 15:27:05.004362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:22.320 [2024-10-25 15:27:05.004381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.804 ms 00:22:22.320 [2024-10-25 15:27:05.004392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.320 [2024-10-25 15:27:05.039190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.320 [2024-10-25 15:27:05.039230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:22.320 [2024-10-25 15:27:05.039243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.820 ms 00:22:22.321 [2024-10-25 15:27:05.039269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.581 [2024-10-25 15:27:05.072913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.581 [2024-10-25 15:27:05.072944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:22.581 [2024-10-25 15:27:05.072956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.625 ms 00:22:22.581 [2024-10-25 15:27:05.072965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.582 [2024-10-25 15:27:05.072999] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:22.582 [2024-10-25 15:27:05.073013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 110336 / 261120 wr_cnt: 1 state: open 00:22:22.582 [2024-10-25 15:27:05.073025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 
wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073699] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073955] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:22.582 [2024-10-25 15:27:05.073965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.073975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.073985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.073995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:22.583 [2024-10-25 15:27:05.074074] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:22.583 [2024-10-25 15:27:05.074084] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 52199eca-5cab-4185-b25a-1e7503e93f9a 00:22:22.583 [2024-10-25 15:27:05.074095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 110336 00:22:22.583 [2024-10-25 15:27:05.074104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 111296 00:22:22.583 [2024-10-25 15:27:05.074113] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 110336 00:22:22.583 [2024-10-25 15:27:05.074123] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0087 00:22:22.583 [2024-10-25 15:27:05.074133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:22.583 [2024-10-25 15:27:05.074142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:22.583 [2024-10-25 15:27:05.074168] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:22.583 [2024-10-25 15:27:05.074177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:22.583 [2024-10-25 15:27:05.074186] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:22.583 [2024-10-25 15:27:05.074195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.583 [2024-10-25 15:27:05.074205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:22.583 [2024-10-25 15:27:05.074224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:22:22.583 [2024-10-25 15:27:05.074234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.093685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.583 [2024-10-25 15:27:05.093715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:22.583 [2024-10-25 15:27:05.093727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 19.449 ms 00:22:22.583 [2024-10-25 15:27:05.093737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.094297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:22.583 [2024-10-25 15:27:05.094310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:22.583 [2024-10-25 15:27:05.094320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:22:22.583 [2024-10-25 15:27:05.094330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.142366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.583 [2024-10-25 15:27:05.142398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:22.583 [2024-10-25 15:27:05.142415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.583 [2024-10-25 15:27:05.142424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.142472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.583 [2024-10-25 15:27:05.142482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:22.583 [2024-10-25 15:27:05.142491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.583 [2024-10-25 15:27:05.142500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.142558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.583 [2024-10-25 15:27:05.142570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:22.583 [2024-10-25 15:27:05.142580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.583 [2024-10-25 15:27:05.142593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.142609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.583 [2024-10-25 15:27:05.142619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:22.583 [2024-10-25 15:27:05.142627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.583 [2024-10-25 15:27:05.142636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.583 [2024-10-25 15:27:05.263335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.583 [2024-10-25 15:27:05.263379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:22.583 [2024-10-25 15:27:05.263393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.583 [2024-10-25 15:27:05.263409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:22.843 [2024-10-25 15:27:05.362125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:22:22.843 [2024-10-25 15:27:05.362277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:22.843 [2024-10-25 15:27:05.362351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:22.843 [2024-10-25 15:27:05.362495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:22.843 [2024-10-25 15:27:05.362565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:22.843 [2024-10-25 15:27:05.362632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:22.843 [2024-10-25 15:27:05.362726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:22.843 [2024-10-25 15:27:05.362736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:22.843 [2024-10-25 15:27:05.362745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:22.843 [2024-10-25 15:27:05.362867] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 662.990 ms, result 0 00:22:24.238 00:22:24.238 00:22:24.238 15:27:06 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:24.238 [2024-10-25 15:27:06.916802] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:22:24.238 [2024-10-25 15:27:06.916914] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77730 ] 00:22:24.496 [2024-10-25 15:27:07.098051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.496 [2024-10-25 15:27:07.205095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.066 [2024-10-25 15:27:07.550770] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.066 [2024-10-25 15:27:07.550837] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:25.066 [2024-10-25 15:27:07.711035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.066 [2024-10-25 15:27:07.711086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:25.066 [2024-10-25 15:27:07.711105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:25.066 [2024-10-25 15:27:07.711115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.066 [2024-10-25 15:27:07.711175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.066 [2024-10-25 15:27:07.711188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:25.066 [2024-10-25 15:27:07.711213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:22:25.066 [2024-10-25 15:27:07.711223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.066 [2024-10-25 15:27:07.711244] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:25.066 [2024-10-25 15:27:07.712289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:25.066 [2024-10-25 15:27:07.712326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.066 [2024-10-25 15:27:07.712337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:25.066 [2024-10-25 15:27:07.712348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:22:25.066 [2024-10-25 15:27:07.712358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.066 [2024-10-25 15:27:07.713766] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:25.066 [2024-10-25 15:27:07.732165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.066 [2024-10-25 15:27:07.732212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:25.066 [2024-10-25 15:27:07.732242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.429 ms 00:22:25.066 [2024-10-25 15:27:07.732251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.066 [2024-10-25 15:27:07.732317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.066 [2024-10-25 15:27:07.732333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:25.066 [2024-10-25 15:27:07.732344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:25.066 [2024-10-25 15:27:07.732354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.739097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:25.067 [2024-10-25 15:27:07.739129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:25.067 [2024-10-25 15:27:07.739156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:22:25.067 [2024-10-25 15:27:07.739166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.739268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.739282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:25.067 [2024-10-25 15:27:07.739292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:25.067 [2024-10-25 15:27:07.739301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.739340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.739351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:25.067 [2024-10-25 15:27:07.739362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:25.067 [2024-10-25 15:27:07.739371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.739394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.067 [2024-10-25 15:27:07.743966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.744000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:25.067 [2024-10-25 15:27:07.744011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.584 ms 00:22:25.067 [2024-10-25 15:27:07.744042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.744071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.744081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:25.067 [2024-10-25 15:27:07.744092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:25.067 [2024-10-25 15:27:07.744101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.744154] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:25.067 [2024-10-25 15:27:07.744176] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:25.067 [2024-10-25 15:27:07.744220] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:25.067 [2024-10-25 15:27:07.744240] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:25.067 [2024-10-25 15:27:07.744338] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:25.067 [2024-10-25 15:27:07.744352] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:25.067 [2024-10-25 15:27:07.744365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:25.067 [2024-10-25 15:27:07.744378] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744389] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744401] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:25.067 [2024-10-25 15:27:07.744411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:25.067 [2024-10-25 15:27:07.744421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:25.067 [2024-10-25 15:27:07.744430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:25.067 [2024-10-25 15:27:07.744445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.744455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:25.067 [2024-10-25 15:27:07.744465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:22:25.067 [2024-10-25 15:27:07.744474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.744546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.067 [2024-10-25 15:27:07.744556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:25.067 [2024-10-25 15:27:07.744566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:25.067 [2024-10-25 15:27:07.744576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.067 [2024-10-25 15:27:07.744670] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:25.067 [2024-10-25 15:27:07.744688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:25.067 [2024-10-25 15:27:07.744699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:25.067 [2024-10-25 15:27:07.744729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:25.067 [2024-10-25 15:27:07.744757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.067 [2024-10-25 15:27:07.744776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:25.067 [2024-10-25 15:27:07.744785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:25.067 [2024-10-25 15:27:07.744795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:25.067 [2024-10-25 15:27:07.744804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:25.067 [2024-10-25 15:27:07.744813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:25.067 [2024-10-25 15:27:07.744831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:25.067 [2024-10-25 15:27:07.744850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744859] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:25.067 [2024-10-25 15:27:07.744877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:25.067 [2024-10-25 15:27:07.744905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:25.067 [2024-10-25 15:27:07.744932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:25.067 [2024-10-25 15:27:07.744958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:25.067 [2024-10-25 15:27:07.744976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:25.067 [2024-10-25 15:27:07.744986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:25.067 [2024-10-25 15:27:07.744995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.067 [2024-10-25 15:27:07.745004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:25.067 [2024-10-25 15:27:07.745013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:25.067 [2024-10-25 15:27:07.745022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:25.067 [2024-10-25 15:27:07.745030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:25.067 [2024-10-25 15:27:07.745040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:25.067 [2024-10-25 15:27:07.745048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.745057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:25.067 [2024-10-25 15:27:07.745066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:25.067 [2024-10-25 15:27:07.745076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.745085] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:25.067 [2024-10-25 15:27:07.745095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:25.067 [2024-10-25 15:27:07.745104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:25.067 [2024-10-25 15:27:07.745114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:25.067 [2024-10-25 15:27:07.745123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:25.067 [2024-10-25 15:27:07.745133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:25.067 [2024-10-25 15:27:07.745142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:25.067 
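The layout numbers in the dump are internally consistent: the 80.00 MiB l2p region is exactly the L2P table size implied by the reported entry count and address width, and the blk_sz the SB metadata dump just below reports for the same region resolves to the same figure assuming a 4 KiB metadata block. A quick shell-arithmetic check:

    # 20971520 L2P entries x 4-byte addresses = 83886080 bytes = 80 MiB,
    # matching "Region l2p ... blocks: 80.00 MiB" above.
    echo $(( 20971520 * 4 ))                 # 83886080
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80
    # The l2p region (type 0x2) spans blk_sz 0x5000 blocks, the same
    # 80 MiB at an assumed 4096-byte FTL metadata block size.
    echo $(( 0x5000 * 4096 / 1024 / 1024 ))  # 80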
[2024-10-25 15:27:07.745152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:25.067 [2024-10-25 15:27:07.745161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:25.067 [2024-10-25 15:27:07.745170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:25.067 [2024-10-25 15:27:07.745192] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:25.067 [2024-10-25 15:27:07.745204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.067 [2024-10-25 15:27:07.745215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:25.067 [2024-10-25 15:27:07.745225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:25.067 [2024-10-25 15:27:07.745235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:25.067 [2024-10-25 15:27:07.745245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:25.067 [2024-10-25 15:27:07.745255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:25.067 [2024-10-25 15:27:07.745266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:25.067 [2024-10-25 15:27:07.745276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:25.067 [2024-10-25 15:27:07.745286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:25.067 [2024-10-25 15:27:07.745296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:25.067 [2024-10-25 15:27:07.745307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:25.068 [2024-10-25 15:27:07.745357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:25.068 [2024-10-25 15:27:07.745368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:25.068 [2024-10-25 15:27:07.745393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:25.068 [2024-10-25 15:27:07.745403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:25.068 [2024-10-25 15:27:07.745416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:25.068 [2024-10-25 15:27:07.745427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.068 [2024-10-25 15:27:07.745437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:25.068 [2024-10-25 15:27:07.745447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:22:25.068 [2024-10-25 15:27:07.745456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.068 [2024-10-25 15:27:07.781347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.068 [2024-10-25 15:27:07.781386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:25.068 [2024-10-25 15:27:07.781399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.904 ms 00:22:25.068 [2024-10-25 15:27:07.781409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.068 [2024-10-25 15:27:07.781498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.068 [2024-10-25 15:27:07.781513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:25.068 [2024-10-25 15:27:07.781524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:25.068 [2024-10-25 15:27:07.781533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.841273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.841312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:25.328 [2024-10-25 15:27:07.841325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.782 ms 00:22:25.328 [2024-10-25 15:27:07.841336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.841385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.841396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:25.328 [2024-10-25 15:27:07.841406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:25.328 [2024-10-25 15:27:07.841420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.841909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.841931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:25.328 [2024-10-25 15:27:07.841942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:22:25.328 [2024-10-25 15:27:07.841952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.842068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.842081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:25.328 [2024-10-25 15:27:07.842092] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:25.328 [2024-10-25 15:27:07.842102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.860749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.860786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:25.328 [2024-10-25 15:27:07.860814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:22:25.328 [2024-10-25 15:27:07.860829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.878695] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:25.328 [2024-10-25 15:27:07.878736] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:25.328 [2024-10-25 15:27:07.878767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.878778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:25.328 [2024-10-25 15:27:07.878789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.866 ms 00:22:25.328 [2024-10-25 15:27:07.878799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.907023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.907068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:25.328 [2024-10-25 15:27:07.907097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.230 ms 00:22:25.328 [2024-10-25 15:27:07.907107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.924337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.924384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:25.328 [2024-10-25 15:27:07.924396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.216 ms 00:22:25.328 [2024-10-25 15:27:07.924405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.941625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.941663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:25.328 [2024-10-25 15:27:07.941675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.196 ms 00:22:25.328 [2024-10-25 15:27:07.941683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:07.942472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:07.942502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:25.328 [2024-10-25 15:27:07.942514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:22:25.328 [2024-10-25 15:27:07.942523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:08.025459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:08.025521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:25.328 [2024-10-25 15:27:08.025538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.044 ms 00:22:25.328 [2024-10-25 15:27:08.025563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:08.037037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:25.328 [2024-10-25 15:27:08.040220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:08.040255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:25.328 [2024-10-25 15:27:08.040270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.620 ms 00:22:25.328 [2024-10-25 15:27:08.040281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:08.040377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:08.040391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:25.328 [2024-10-25 15:27:08.040403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:25.328 [2024-10-25 15:27:08.040413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:08.041905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:08.041943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:25.328 [2024-10-25 15:27:08.041955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:22:25.328 [2024-10-25 15:27:08.041965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.328 [2024-10-25 15:27:08.041996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.328 [2024-10-25 15:27:08.042007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:25.329 [2024-10-25 15:27:08.042018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:25.329 [2024-10-25 15:27:08.042028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.329 [2024-10-25 15:27:08.042091] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:25.329 [2024-10-25 15:27:08.042108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.329 [2024-10-25 15:27:08.042119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:25.329 [2024-10-25 15:27:08.042129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:25.329 [2024-10-25 15:27:08.042139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.587 [2024-10-25 15:27:08.078936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.588 [2024-10-25 15:27:08.078985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:25.588 [2024-10-25 15:27:08.078999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.833 ms 00:22:25.588 [2024-10-25 15:27:08.079010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.588 [2024-10-25 15:27:08.079151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.588 [2024-10-25 15:27:08.079169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:25.588 [2024-10-25 15:27:08.079192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:25.588 [2024-10-25 15:27:08.079202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
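Every management step in the sequence above is traced as a four-record group (Action, name, duration, status), which makes per-step timings easy to mine out of a captured log. A rough post-processing sketch, assuming GNU sed/sort, a 1:1 pairing of the 428/430 records, and a hypothetical capture file ftl.log; this is not an SPDK tool:

    # Split the capture to one record per line, then pair each step name
    # with its duration and list the slowest steps at the end
    # (e.g. "Initialize metadata  35.904" from the records above).
    sed 's/\[2024-10-25 /\n&/g' ftl.log > records.txt
    paste <(grep ' 428:trace_step' records.txt | sed 's/.* name: //; s/ [0-9:.]*$//') \
          <(grep ' 430:trace_step' records.txt | sed 's/.* duration: //; s/ ms.*//') |
      sort -t$'\t' -k2 -g | tail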
00:22:25.588 [2024-10-25 15:27:08.080332] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 369.413 ms, result 0 00:22:26.965  [2024-10-25T15:27:10.629Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-25T15:27:11.567Z] Copying: 48/1024 [MB] (25 MBps) [2024-10-25T15:27:12.504Z] Copying: 73/1024 [MB] (25 MBps) [2024-10-25T15:27:13.441Z] Copying: 99/1024 [MB] (25 MBps) [2024-10-25T15:27:14.399Z] Copying: 125/1024 [MB] (25 MBps) [2024-10-25T15:27:15.336Z] Copying: 151/1024 [MB] (25 MBps) [2024-10-25T15:27:16.713Z] Copying: 177/1024 [MB] (25 MBps) [2024-10-25T15:27:17.280Z] Copying: 203/1024 [MB] (26 MBps) [2024-10-25T15:27:18.654Z] Copying: 230/1024 [MB] (26 MBps) [2024-10-25T15:27:19.589Z] Copying: 257/1024 [MB] (26 MBps) [2024-10-25T15:27:20.524Z] Copying: 283/1024 [MB] (26 MBps) [2024-10-25T15:27:21.459Z] Copying: 310/1024 [MB] (26 MBps) [2024-10-25T15:27:22.396Z] Copying: 337/1024 [MB] (27 MBps) [2024-10-25T15:27:23.333Z] Copying: 364/1024 [MB] (26 MBps) [2024-10-25T15:27:24.710Z] Copying: 390/1024 [MB] (26 MBps) [2024-10-25T15:27:25.277Z] Copying: 416/1024 [MB] (25 MBps) [2024-10-25T15:27:26.653Z] Copying: 441/1024 [MB] (25 MBps) [2024-10-25T15:27:27.592Z] Copying: 467/1024 [MB] (25 MBps) [2024-10-25T15:27:28.527Z] Copying: 493/1024 [MB] (26 MBps) [2024-10-25T15:27:29.489Z] Copying: 520/1024 [MB] (26 MBps) [2024-10-25T15:27:30.427Z] Copying: 546/1024 [MB] (26 MBps) [2024-10-25T15:27:31.365Z] Copying: 573/1024 [MB] (27 MBps) [2024-10-25T15:27:32.303Z] Copying: 600/1024 [MB] (26 MBps) [2024-10-25T15:27:33.703Z] Copying: 627/1024 [MB] (26 MBps) [2024-10-25T15:27:34.271Z] Copying: 654/1024 [MB] (27 MBps) [2024-10-25T15:27:35.650Z] Copying: 681/1024 [MB] (27 MBps) [2024-10-25T15:27:36.586Z] Copying: 708/1024 [MB] (26 MBps) [2024-10-25T15:27:37.523Z] Copying: 735/1024 [MB] (26 MBps) [2024-10-25T15:27:38.482Z] Copying: 761/1024 [MB] (26 MBps) [2024-10-25T15:27:39.420Z] Copying: 788/1024 [MB] (26 MBps) [2024-10-25T15:27:40.359Z] Copying: 815/1024 [MB] (27 MBps) [2024-10-25T15:27:41.302Z] Copying: 842/1024 [MB] (26 MBps) [2024-10-25T15:27:42.682Z] Copying: 869/1024 [MB] (27 MBps) [2024-10-25T15:27:43.250Z] Copying: 896/1024 [MB] (26 MBps) [2024-10-25T15:27:44.628Z] Copying: 923/1024 [MB] (26 MBps) [2024-10-25T15:27:45.565Z] Copying: 950/1024 [MB] (27 MBps) [2024-10-25T15:27:46.502Z] Copying: 977/1024 [MB] (27 MBps) [2024-10-25T15:27:47.070Z] Copying: 1005/1024 [MB] (27 MBps) [2024-10-25T15:27:47.639Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-25 15:27:47.346468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.346545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:04.911 [2024-10-25 15:27:47.346564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:04.911 [2024-10-25 15:27:47.346575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.346605] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:04.911 [2024-10-25 15:27:47.351686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.351744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:04.911 [2024-10-25 15:27:47.351759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.069 ms 00:23:04.911 [2024-10-25 15:27:47.351770] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.351986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.352169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:04.911 [2024-10-25 15:27:47.352196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:23:04.911 [2024-10-25 15:27:47.352207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.356538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.356586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:04.911 [2024-10-25 15:27:47.356601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.317 ms 00:23:04.911 [2024-10-25 15:27:47.356785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.362723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.362763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:04.911 [2024-10-25 15:27:47.362777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.904 ms 00:23:04.911 [2024-10-25 15:27:47.362787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.399170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.399222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:04.911 [2024-10-25 15:27:47.399236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.388 ms 00:23:04.911 [2024-10-25 15:27:47.399262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.419544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.419588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:04.911 [2024-10-25 15:27:47.419608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.273 ms 00:23:04.911 [2024-10-25 15:27:47.419618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.566290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.566379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:04.911 [2024-10-25 15:27:47.566397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.845 ms 00:23:04.911 [2024-10-25 15:27:47.566408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.911 [2024-10-25 15:27:47.603354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.911 [2024-10-25 15:27:47.603418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:04.911 [2024-10-25 15:27:47.603434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.983 ms 00:23:04.911 [2024-10-25 15:27:47.603445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.171 [2024-10-25 15:27:47.638954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.171 [2024-10-25 15:27:47.639017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:05.171 [2024-10-25 15:27:47.639060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.518 ms 00:23:05.171 
[2024-10-25 15:27:47.639070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.171 [2024-10-25 15:27:47.674343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.171 [2024-10-25 15:27:47.674404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:05.171 [2024-10-25 15:27:47.674436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.286 ms 00:23:05.171 [2024-10-25 15:27:47.674446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.171 [2024-10-25 15:27:47.710236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.171 [2024-10-25 15:27:47.710315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:05.171 [2024-10-25 15:27:47.710331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.751 ms 00:23:05.171 [2024-10-25 15:27:47.710342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.171 [2024-10-25 15:27:47.710389] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:05.171 [2024-10-25 15:27:47.710407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:05.171 [2024-10-25 15:27:47.710420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 
[2024-10-25 15:27:47.710591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:05.171 [2024-10-25 15:27:47.710750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:23:05.172 [2024-10-25 15:27:47.710855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.710997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:05.172 [2024-10-25 15:27:47.711490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:05.172 [2024-10-25 15:27:47.711500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 52199eca-5cab-4185-b25a-1e7503e93f9a 00:23:05.172 [2024-10-25 15:27:47.711511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:05.172 [2024-10-25 15:27:47.711521] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 21696 00:23:05.172 [2024-10-25 15:27:47.711531] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 20736 00:23:05.172 [2024-10-25 15:27:47.711541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0463 00:23:05.172 [2024-10-25 15:27:47.711550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:05.172 [2024-10-25 15:27:47.711561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:05.172 [2024-10-25 15:27:47.711578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:05.172 [2024-10-25 15:27:47.711599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:05.172 [2024-10-25 15:27:47.711608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:05.172 [2024-10-25 15:27:47.711618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.172 [2024-10-25 15:27:47.711629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:05.172 [2024-10-25 15:27:47.711640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:23:05.172 [2024-10-25 15:27:47.711649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.172 [2024-10-25 15:27:47.731935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.172 [2024-10-25 15:27:47.731988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:05.172 [2024-10-25 15:27:47.732004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.274 ms 00:23:05.172 [2024-10-25 15:27:47.732014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.172 [2024-10-25 15:27:47.732577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.172 [2024-10-25 15:27:47.732597] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:05.172 [2024-10-25 15:27:47.732608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:05.172 [2024-10-25 15:27:47.732618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.172 [2024-10-25 15:27:47.783351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.172 [2024-10-25 15:27:47.783419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.172 [2024-10-25 15:27:47.783439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.172 [2024-10-25 15:27:47.783451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.172 [2024-10-25 15:27:47.783523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.172 [2024-10-25 15:27:47.783534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.172 [2024-10-25 15:27:47.783545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.172 [2024-10-25 15:27:47.783555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.172 [2024-10-25 15:27:47.783655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.172 [2024-10-25 15:27:47.783668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.172 [2024-10-25 15:27:47.783679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.173 [2024-10-25 15:27:47.783694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.173 [2024-10-25 15:27:47.783712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.173 [2024-10-25 15:27:47.783722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.173 [2024-10-25 15:27:47.783732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.173 [2024-10-25 15:27:47.783742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:47.903538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:47.903623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.431 [2024-10-25 15:27:47.903639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:47.903655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.001870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.001950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.431 [2024-10-25 15:27:48.001965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.001975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.002083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:05.431 [2024-10-25 15:27:48.002093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:23:05.431 [2024-10-25 15:27:48.002156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:05.431 [2024-10-25 15:27:48.002166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.002349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:05.431 [2024-10-25 15:27:48.002359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.002419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:05.431 [2024-10-25 15:27:48.002430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.002486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:05.431 [2024-10-25 15:27:48.002497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.431 [2024-10-25 15:27:48.002563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:05.431 [2024-10-25 15:27:48.002573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.431 [2024-10-25 15:27:48.002582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.431 [2024-10-25 15:27:48.002695] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 657.262 ms, result 0 00:23:06.366 00:23:06.366 00:23:06.366 15:27:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:08.269 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:08.269 Process with pid 76200 is not found 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76200 00:23:08.269 15:27:50 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76200 ']' 00:23:08.269 15:27:50 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76200 00:23:08.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76200) - No such process 00:23:08.269 15:27:50 ftl.ftl_restore -- 
common/autotest_common.sh@977 -- # echo 'Process with pid 76200 is not found' 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:08.269 Remove shared memory files 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:08.269 15:27:50 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:08.269 00:23:08.269 real 3m10.905s 00:23:08.269 user 2m58.003s 00:23:08.269 sys 0m13.735s 00:23:08.269 ************************************ 00:23:08.269 END TEST ftl_restore 00:23:08.269 ************************************ 00:23:08.269 15:27:50 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:08.269 15:27:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:08.269 15:27:50 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:08.269 15:27:50 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:08.269 15:27:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:08.269 15:27:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:08.269 ************************************ 00:23:08.269 START TEST ftl_dirty_shutdown 00:23:08.269 ************************************ 00:23:08.269 15:27:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:08.528 * Looking for test storage... 
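The ftl_restore run above passes on exactly one criterion: md5sum -c reporting OK against the digest recorded before the device went through its shutdown/restore cycle. A minimal sketch of that round-trip check, with hypothetical paths standing in for the harness's own variables:

  # sketch: record a digest before shutdown, verify it after restore (paths hypothetical)
  testfile=/tmp/ftl_testfile
  dd if=/dev/urandom of="$testfile" bs=4096 count=262144   # seed data through the FTL bdev
  md5sum "$testfile" > "$testfile.md5"                     # expected digest, captured up front
  # ... FTL shutdown and restore happen here ...
  md5sum -c "$testfile.md5"                                # prints "<file>: OK" only if data survived
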
00:23:08.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.528 --rc genhtml_branch_coverage=1 00:23:08.528 --rc genhtml_function_coverage=1 00:23:08.528 --rc genhtml_legend=1 00:23:08.528 --rc geninfo_all_blocks=1 00:23:08.528 --rc geninfo_unexecuted_blocks=1 00:23:08.528 00:23:08.528 ' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.528 --rc genhtml_branch_coverage=1 00:23:08.528 --rc genhtml_function_coverage=1 00:23:08.528 --rc genhtml_legend=1 00:23:08.528 --rc geninfo_all_blocks=1 00:23:08.528 --rc geninfo_unexecuted_blocks=1 00:23:08.528 00:23:08.528 ' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.528 --rc genhtml_branch_coverage=1 00:23:08.528 --rc genhtml_function_coverage=1 00:23:08.528 --rc genhtml_legend=1 00:23:08.528 --rc geninfo_all_blocks=1 00:23:08.528 --rc geninfo_unexecuted_blocks=1 00:23:08.528 00:23:08.528 ' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:08.528 --rc genhtml_branch_coverage=1 00:23:08.528 --rc genhtml_function_coverage=1 00:23:08.528 --rc genhtml_legend=1 00:23:08.528 --rc geninfo_all_blocks=1 00:23:08.528 --rc geninfo_unexecuted_blocks=1 00:23:08.528 00:23:08.528 ' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:08.528 15:27:51 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78245 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78245 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78245 ']' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:08.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:08.528 15:27:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:08.787 [2024-10-25 15:27:51.319089] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
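The waitforlisten step above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket; only then do the RPC-driven setup steps that follow proceed. A rough equivalent of that launch-and-wait loop, using the binary and socket paths visible in this trace (the rpc_get_methods probe is an assumption, not necessarily what the harness itself calls):

  # sketch: start the target on core 0, then poll its RPC socket until it answers
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1   # the socket answers once the reactor is up (see the NOTICE lines nearby)
  done
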
00:23:08.787 [2024-10-25 15:27:51.319650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78245 ] 00:23:08.787 [2024-10-25 15:27:51.499878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.045 [2024-10-25 15:27:51.613696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:09.981 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:10.241 { 00:23:10.241 "name": "nvme0n1", 00:23:10.241 "aliases": [ 00:23:10.241 "a596421a-cced-4469-80c5-cfb6d0de0d22" 00:23:10.241 ], 00:23:10.241 "product_name": "NVMe disk", 00:23:10.241 "block_size": 4096, 00:23:10.241 "num_blocks": 1310720, 00:23:10.241 "uuid": "a596421a-cced-4469-80c5-cfb6d0de0d22", 00:23:10.241 "numa_id": -1, 00:23:10.241 "assigned_rate_limits": { 00:23:10.241 "rw_ios_per_sec": 0, 00:23:10.241 "rw_mbytes_per_sec": 0, 00:23:10.241 "r_mbytes_per_sec": 0, 00:23:10.241 "w_mbytes_per_sec": 0 00:23:10.241 }, 00:23:10.241 "claimed": true, 00:23:10.241 "claim_type": "read_many_write_one", 00:23:10.241 "zoned": false, 00:23:10.241 "supported_io_types": { 00:23:10.241 "read": true, 00:23:10.241 "write": true, 00:23:10.241 "unmap": true, 00:23:10.241 "flush": true, 00:23:10.241 "reset": true, 00:23:10.241 "nvme_admin": true, 00:23:10.241 "nvme_io": true, 00:23:10.241 "nvme_io_md": false, 00:23:10.241 "write_zeroes": true, 00:23:10.241 "zcopy": false, 00:23:10.241 "get_zone_info": false, 00:23:10.241 "zone_management": false, 00:23:10.241 "zone_append": false, 00:23:10.241 "compare": true, 00:23:10.241 "compare_and_write": false, 00:23:10.241 "abort": true, 00:23:10.241 "seek_hole": false, 00:23:10.241 "seek_data": false, 00:23:10.241 
"copy": true, 00:23:10.241 "nvme_iov_md": false 00:23:10.241 }, 00:23:10.241 "driver_specific": { 00:23:10.241 "nvme": [ 00:23:10.241 { 00:23:10.241 "pci_address": "0000:00:11.0", 00:23:10.241 "trid": { 00:23:10.241 "trtype": "PCIe", 00:23:10.241 "traddr": "0000:00:11.0" 00:23:10.241 }, 00:23:10.241 "ctrlr_data": { 00:23:10.241 "cntlid": 0, 00:23:10.241 "vendor_id": "0x1b36", 00:23:10.241 "model_number": "QEMU NVMe Ctrl", 00:23:10.241 "serial_number": "12341", 00:23:10.241 "firmware_revision": "8.0.0", 00:23:10.241 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:10.241 "oacs": { 00:23:10.241 "security": 0, 00:23:10.241 "format": 1, 00:23:10.241 "firmware": 0, 00:23:10.241 "ns_manage": 1 00:23:10.241 }, 00:23:10.241 "multi_ctrlr": false, 00:23:10.241 "ana_reporting": false 00:23:10.241 }, 00:23:10.241 "vs": { 00:23:10.241 "nvme_version": "1.4" 00:23:10.241 }, 00:23:10.241 "ns_data": { 00:23:10.241 "id": 1, 00:23:10.241 "can_share": false 00:23:10.241 } 00:23:10.241 } 00:23:10.241 ], 00:23:10.241 "mp_policy": "active_passive" 00:23:10.241 } 00:23:10.241 } 00:23:10.241 ]' 00:23:10.241 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:10.505 15:27:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:10.505 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:10.763 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5adbfdb8-8d94-4eb8-9f2f-69782688ddb5 00:23:10.763 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:10.763 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5adbfdb8-8d94-4eb8-9f2f-69782688ddb5 00:23:10.763 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:11.021 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=63074878-b802-43f7-82d2-130c1ec00bc7 00:23:11.021 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 63074878-b802-43f7-82d2-130c1ec00bc7 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:11.280 15:27:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:11.538 { 00:23:11.538 "name": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:11.538 "aliases": [ 00:23:11.538 "lvs/nvme0n1p0" 00:23:11.538 ], 00:23:11.538 "product_name": "Logical Volume", 00:23:11.538 "block_size": 4096, 00:23:11.538 "num_blocks": 26476544, 00:23:11.538 "uuid": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:11.538 "assigned_rate_limits": { 00:23:11.538 "rw_ios_per_sec": 0, 00:23:11.538 "rw_mbytes_per_sec": 0, 00:23:11.538 "r_mbytes_per_sec": 0, 00:23:11.538 "w_mbytes_per_sec": 0 00:23:11.538 }, 00:23:11.538 "claimed": false, 00:23:11.538 "zoned": false, 00:23:11.538 "supported_io_types": { 00:23:11.538 "read": true, 00:23:11.538 "write": true, 00:23:11.538 "unmap": true, 00:23:11.538 "flush": false, 00:23:11.538 "reset": true, 00:23:11.538 "nvme_admin": false, 00:23:11.538 "nvme_io": false, 00:23:11.538 "nvme_io_md": false, 00:23:11.538 "write_zeroes": true, 00:23:11.538 "zcopy": false, 00:23:11.538 "get_zone_info": false, 00:23:11.538 "zone_management": false, 00:23:11.538 "zone_append": false, 00:23:11.538 "compare": false, 00:23:11.538 "compare_and_write": false, 00:23:11.538 "abort": false, 00:23:11.538 "seek_hole": true, 00:23:11.538 "seek_data": true, 00:23:11.538 "copy": false, 00:23:11.538 "nvme_iov_md": false 00:23:11.538 }, 00:23:11.538 "driver_specific": { 00:23:11.538 "lvol": { 00:23:11.538 "lvol_store_uuid": "63074878-b802-43f7-82d2-130c1ec00bc7", 00:23:11.538 "base_bdev": "nvme0n1", 00:23:11.538 "thin_provision": true, 00:23:11.538 "num_allocated_clusters": 0, 00:23:11.538 "snapshot": false, 00:23:11.538 "clone": false, 00:23:11.538 "esnap_clone": false 00:23:11.538 } 00:23:11.538 } 00:23:11.538 } 00:23:11.538 ]' 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:11.538 15:27:54 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:11.797 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:11.797 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:11.798 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:12.057 { 00:23:12.057 "name": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:12.057 "aliases": [ 00:23:12.057 "lvs/nvme0n1p0" 00:23:12.057 ], 00:23:12.057 "product_name": "Logical Volume", 00:23:12.057 "block_size": 4096, 00:23:12.057 "num_blocks": 26476544, 00:23:12.057 "uuid": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:12.057 "assigned_rate_limits": { 00:23:12.057 "rw_ios_per_sec": 0, 00:23:12.057 "rw_mbytes_per_sec": 0, 00:23:12.057 "r_mbytes_per_sec": 0, 00:23:12.057 "w_mbytes_per_sec": 0 00:23:12.057 }, 00:23:12.057 "claimed": false, 00:23:12.057 "zoned": false, 00:23:12.057 "supported_io_types": { 00:23:12.057 "read": true, 00:23:12.057 "write": true, 00:23:12.057 "unmap": true, 00:23:12.057 "flush": false, 00:23:12.057 "reset": true, 00:23:12.057 "nvme_admin": false, 00:23:12.057 "nvme_io": false, 00:23:12.057 "nvme_io_md": false, 00:23:12.057 "write_zeroes": true, 00:23:12.057 "zcopy": false, 00:23:12.057 "get_zone_info": false, 00:23:12.057 "zone_management": false, 00:23:12.057 "zone_append": false, 00:23:12.057 "compare": false, 00:23:12.057 "compare_and_write": false, 00:23:12.057 "abort": false, 00:23:12.057 "seek_hole": true, 00:23:12.057 "seek_data": true, 00:23:12.057 "copy": false, 00:23:12.057 "nvme_iov_md": false 00:23:12.057 }, 00:23:12.057 "driver_specific": { 00:23:12.057 "lvol": { 00:23:12.057 "lvol_store_uuid": "63074878-b802-43f7-82d2-130c1ec00bc7", 00:23:12.057 "base_bdev": "nvme0n1", 00:23:12.057 "thin_provision": true, 00:23:12.057 "num_allocated_clusters": 0, 00:23:12.057 "snapshot": false, 00:23:12.057 "clone": false, 00:23:12.057 "esnap_clone": false 00:23:12.057 } 00:23:12.057 } 00:23:12.057 } 00:23:12.057 ]' 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:12.057 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:12.058 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:12.058 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:12.058 15:27:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:12.316 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:12.317 15:27:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e58ac902-d546-4dba-bc4f-0a8c86cf42e1 00:23:12.575 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:12.575 { 00:23:12.575 "name": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:12.575 "aliases": [ 00:23:12.575 "lvs/nvme0n1p0" 00:23:12.575 ], 00:23:12.575 "product_name": "Logical Volume", 00:23:12.575 "block_size": 4096, 00:23:12.575 "num_blocks": 26476544, 00:23:12.575 "uuid": "e58ac902-d546-4dba-bc4f-0a8c86cf42e1", 00:23:12.575 "assigned_rate_limits": { 00:23:12.575 "rw_ios_per_sec": 0, 00:23:12.575 "rw_mbytes_per_sec": 0, 00:23:12.575 "r_mbytes_per_sec": 0, 00:23:12.575 "w_mbytes_per_sec": 0 00:23:12.575 }, 00:23:12.575 "claimed": false, 00:23:12.575 "zoned": false, 00:23:12.575 "supported_io_types": { 00:23:12.575 "read": true, 00:23:12.575 "write": true, 00:23:12.575 "unmap": true, 00:23:12.575 "flush": false, 00:23:12.575 "reset": true, 00:23:12.575 "nvme_admin": false, 00:23:12.575 "nvme_io": false, 00:23:12.575 "nvme_io_md": false, 00:23:12.575 "write_zeroes": true, 00:23:12.575 "zcopy": false, 00:23:12.575 "get_zone_info": false, 00:23:12.575 "zone_management": false, 00:23:12.575 "zone_append": false, 00:23:12.576 "compare": false, 00:23:12.576 "compare_and_write": false, 00:23:12.576 "abort": false, 00:23:12.576 "seek_hole": true, 00:23:12.576 "seek_data": true, 00:23:12.576 "copy": false, 00:23:12.576 "nvme_iov_md": false 00:23:12.576 }, 00:23:12.576 "driver_specific": { 00:23:12.576 "lvol": { 00:23:12.576 "lvol_store_uuid": "63074878-b802-43f7-82d2-130c1ec00bc7", 00:23:12.576 "base_bdev": "nvme0n1", 00:23:12.576 "thin_provision": true, 00:23:12.576 "num_allocated_clusters": 0, 00:23:12.576 "snapshot": false, 00:23:12.576 "clone": false, 00:23:12.576 "esnap_clone": false 00:23:12.576 } 00:23:12.576 } 00:23:12.576 } 00:23:12.576 ]' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e58ac902-d546-4dba-bc4f-0a8c86cf42e1 
--l2p_dram_limit 10' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:12.576 15:27:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e58ac902-d546-4dba-bc4f-0a8c86cf42e1 --l2p_dram_limit 10 -c nvc0n1p0 00:23:12.835 [2024-10-25 15:27:55.401198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.835 [2024-10-25 15:27:55.401271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:12.835 [2024-10-25 15:27:55.401291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:12.835 [2024-10-25 15:27:55.401302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.835 [2024-10-25 15:27:55.401368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.835 [2024-10-25 15:27:55.401381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:12.835 [2024-10-25 15:27:55.401393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:12.835 [2024-10-25 15:27:55.401403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.401435] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:12.836 [2024-10-25 15:27:55.402467] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:12.836 [2024-10-25 15:27:55.402648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.402666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:12.836 [2024-10-25 15:27:55.402683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:23:12.836 [2024-10-25 15:27:55.402693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.402845] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc8ab459-38ff-48ba-a6d3-54204f33fed3 00:23:12.836 [2024-10-25 15:27:55.404313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.404350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:12.836 [2024-10-25 15:27:55.404363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:12.836 [2024-10-25 15:27:55.404376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.411947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.411979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:12.836 [2024-10-25 15:27:55.411992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.537 ms 00:23:12.836 [2024-10-25 15:27:55.412008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.412112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.412129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:12.836 [2024-10-25 15:27:55.412140] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:12.836 [2024-10-25 15:27:55.412158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.412236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.412252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:12.836 [2024-10-25 15:27:55.412263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:12.836 [2024-10-25 15:27:55.412276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.412303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:12.836 [2024-10-25 15:27:55.417312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.417346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:12.836 [2024-10-25 15:27:55.417360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:23:12.836 [2024-10-25 15:27:55.417374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.417411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.417422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:12.836 [2024-10-25 15:27:55.417434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:12.836 [2024-10-25 15:27:55.417444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.417483] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:12.836 [2024-10-25 15:27:55.417607] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:12.836 [2024-10-25 15:27:55.417626] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:12.836 [2024-10-25 15:27:55.417639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:12.836 [2024-10-25 15:27:55.417654] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:12.836 [2024-10-25 15:27:55.417666] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:12.836 [2024-10-25 15:27:55.417680] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:12.836 [2024-10-25 15:27:55.417690] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:12.836 [2024-10-25 15:27:55.417702] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:12.836 [2024-10-25 15:27:55.417711] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:12.836 [2024-10-25 15:27:55.417727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.417737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:12.836 [2024-10-25 15:27:55.417749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:23:12.836 [2024-10-25 15:27:55.417770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.417845] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.836 [2024-10-25 15:27:55.417856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:12.836 [2024-10-25 15:27:55.417869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:12.836 [2024-10-25 15:27:55.417879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.836 [2024-10-25 15:27:55.417966] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:12.836 [2024-10-25 15:27:55.417980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:12.836 [2024-10-25 15:27:55.417993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:12.836 [2024-10-25 15:27:55.418025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:12.836 [2024-10-25 15:27:55.418057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:12.836 [2024-10-25 15:27:55.418078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:12.836 [2024-10-25 15:27:55.418088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:12.836 [2024-10-25 15:27:55.418099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:12.836 [2024-10-25 15:27:55.418108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:12.836 [2024-10-25 15:27:55.418120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:12.836 [2024-10-25 15:27:55.418129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:12.836 [2024-10-25 15:27:55.418151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:12.836 [2024-10-25 15:27:55.418214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:12.836 [2024-10-25 15:27:55.418264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:12.836 [2024-10-25 15:27:55.418296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418318] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:12.836 [2024-10-25 15:27:55.418328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:12.836 [2024-10-25 15:27:55.418362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:12.836 [2024-10-25 15:27:55.418383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:12.836 [2024-10-25 15:27:55.418392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:12.836 [2024-10-25 15:27:55.418404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:12.836 [2024-10-25 15:27:55.418413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:12.836 [2024-10-25 15:27:55.418424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:12.836 [2024-10-25 15:27:55.418433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:12.836 [2024-10-25 15:27:55.418454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:12.836 [2024-10-25 15:27:55.418465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418474] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:12.836 [2024-10-25 15:27:55.418487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:12.836 [2024-10-25 15:27:55.418497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:12.836 [2024-10-25 15:27:55.418519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:12.836 [2024-10-25 15:27:55.418535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:12.836 [2024-10-25 15:27:55.418544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:12.836 [2024-10-25 15:27:55.418556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:12.836 [2024-10-25 15:27:55.418565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:12.836 [2024-10-25 15:27:55.418577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:12.836 [2024-10-25 15:27:55.418593] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:12.836 [2024-10-25 15:27:55.418608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:12.836 [2024-10-25 15:27:55.418619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:12.836 [2024-10-25 15:27:55.418632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:12.836 [2024-10-25 15:27:55.418642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:12.836 [2024-10-25 15:27:55.418655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:12.837 [2024-10-25 15:27:55.418666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:12.837 [2024-10-25 15:27:55.418678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:12.837 [2024-10-25 15:27:55.418689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:12.837 [2024-10-25 15:27:55.418702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:12.837 [2024-10-25 15:27:55.418712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:12.837 [2024-10-25 15:27:55.418727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:12.837 [2024-10-25 15:27:55.418792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:12.837 [2024-10-25 15:27:55.418805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:12.837 [2024-10-25 15:27:55.418833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:12.837 [2024-10-25 15:27:55.418844] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:12.837 [2024-10-25 15:27:55.418856] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:12.837 [2024-10-25 15:27:55.418867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:12.837 [2024-10-25 15:27:55.418880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:12.837 [2024-10-25 15:27:55.418891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:23:12.837 [2024-10-25 15:27:55.418903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:12.837 [2024-10-25 15:27:55.418947] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:12.837 [2024-10-25 15:27:55.418964] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:16.122 [2024-10-25 15:27:58.797730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.122 [2024-10-25 15:27:58.797786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:16.122 [2024-10-25 15:27:58.797802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3384.267 ms 00:23:16.122 [2024-10-25 15:27:58.797815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.122 [2024-10-25 15:27:58.836051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.122 [2024-10-25 15:27:58.836266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.122 [2024-10-25 15:27:58.836292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.018 ms 00:23:16.122 [2024-10-25 15:27:58.836306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.122 [2024-10-25 15:27:58.836462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.122 [2024-10-25 15:27:58.836478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:16.122 [2024-10-25 15:27:58.836490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:16.122 [2024-10-25 15:27:58.836505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.380 [2024-10-25 15:27:58.882281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.380 [2024-10-25 15:27:58.882430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.381 [2024-10-25 15:27:58.882581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.807 ms 00:23:16.381 [2024-10-25 15:27:58.882625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:58.882686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:58.882854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.381 [2024-10-25 15:27:58.882962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:16.381 [2024-10-25 15:27:58.883009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:58.883535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:58.883593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.381 [2024-10-25 15:27:58.883701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:23:16.381 [2024-10-25 15:27:58.883742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:58.883875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:58.883973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.381 [2024-10-25 15:27:58.884015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:23:16.381 [2024-10-25 15:27:58.884053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:58.904454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:58.904599] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.381 [2024-10-25 15:27:58.904786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.339 ms 00:23:16.381 [2024-10-25 15:27:58.904834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:58.918012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:16.381 [2024-10-25 15:27:58.921408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:58.921555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:16.381 [2024-10-25 15:27:58.921782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.474 ms 00:23:16.381 [2024-10-25 15:27:58.921820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:59.029415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:59.029682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:16.381 [2024-10-25 15:27:59.029773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.699 ms 00:23:16.381 [2024-10-25 15:27:59.029810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:59.030026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:59.030119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:16.381 [2024-10-25 15:27:59.030227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:23:16.381 [2024-10-25 15:27:59.030265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:59.067801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:59.068037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:16.381 [2024-10-25 15:27:59.068135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.499 ms 00:23:16.381 [2024-10-25 15:27:59.068172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:59.104974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:59.105274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:16.381 [2024-10-25 15:27:59.105469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.718 ms 00:23:16.381 [2024-10-25 15:27:59.105503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.381 [2024-10-25 15:27:59.106256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.381 [2024-10-25 15:27:59.106379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:16.381 [2024-10-25 15:27:59.106482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:23:16.381 [2024-10-25 15:27:59.106519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.210494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.210752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:16.640 [2024-10-25 15:27:59.210859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.018 ms 00:23:16.640 [2024-10-25 15:27:59.210896] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.249125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.249415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:16.640 [2024-10-25 15:27:59.249517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.131 ms 00:23:16.640 [2024-10-25 15:27:59.249554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.286965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.287247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:16.640 [2024-10-25 15:27:59.287442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.391 ms 00:23:16.640 [2024-10-25 15:27:59.287458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.324507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.324565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:16.640 [2024-10-25 15:27:59.324582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.019 ms 00:23:16.640 [2024-10-25 15:27:59.324610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.324668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.324680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:16.640 [2024-10-25 15:27:59.324698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:16.640 [2024-10-25 15:27:59.324707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.324825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.640 [2024-10-25 15:27:59.324838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:16.640 [2024-10-25 15:27:59.324851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:16.640 [2024-10-25 15:27:59.324860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.640 [2024-10-25 15:27:59.325908] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3930.672 ms, result 0 00:23:16.640 { 00:23:16.640 "name": "ftl0", 00:23:16.640 "uuid": "bc8ab459-38ff-48ba-a6d3-54204f33fed3" 00:23:16.640 } 00:23:16.640 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:16.640 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:16.899 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:16.899 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:16.899 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:17.158 /dev/nbd0 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:17.158 1+0 records in 00:23:17.158 1+0 records out 00:23:17.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442474 s, 9.3 MB/s 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:17.158 15:27:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:17.416 [2024-10-25 15:27:59.907060] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:23:17.416 [2024-10-25 15:27:59.907172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78387 ] 00:23:17.416 [2024-10-25 15:28:00.090190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.674 [2024-10-25 15:28:00.203745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.053  [2024-10-25T15:28:02.719Z] Copying: 202/1024 [MB] (202 MBps) [2024-10-25T15:28:03.656Z] Copying: 405/1024 [MB] (203 MBps) [2024-10-25T15:28:04.593Z] Copying: 609/1024 [MB] (203 MBps) [2024-10-25T15:28:05.530Z] Copying: 811/1024 [MB] (202 MBps) [2024-10-25T15:28:05.789Z] Copying: 1000/1024 [MB] (188 MBps) [2024-10-25T15:28:06.726Z] Copying: 1024/1024 [MB] (average 200 MBps) 00:23:23.998 00:23:24.258 15:28:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:26.228 15:28:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:26.228 [2024-10-25 15:28:08.518815] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:23:26.228 [2024-10-25 15:28:08.518941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78485 ] 00:23:26.228 [2024-10-25 15:28:08.700880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.228 [2024-10-25 15:28:08.807755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.604  [2024-10-25T15:28:11.274Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-25T15:28:12.212Z] Copying: 33/1024 [MB] (17 MBps) [2024-10-25T15:28:13.149Z] Copying: 51/1024 [MB] (17 MBps) [2024-10-25T15:28:14.528Z] Copying: 68/1024 [MB] (17 MBps) [2024-10-25T15:28:15.466Z] Copying: 85/1024 [MB] (16 MBps) [2024-10-25T15:28:16.404Z] Copying: 102/1024 [MB] (17 MBps) [2024-10-25T15:28:17.341Z] Copying: 119/1024 [MB] (17 MBps) [2024-10-25T15:28:18.279Z] Copying: 136/1024 [MB] (17 MBps) [2024-10-25T15:28:19.217Z] Copying: 154/1024 [MB] (17 MBps) [2024-10-25T15:28:20.155Z] Copying: 172/1024 [MB] (17 MBps) [2024-10-25T15:28:21.536Z] Copying: 189/1024 [MB] (17 MBps) [2024-10-25T15:28:22.102Z] Copying: 206/1024 [MB] (17 MBps) [2024-10-25T15:28:23.481Z] Copying: 223/1024 [MB] (16 MBps) [2024-10-25T15:28:24.420Z] Copying: 241/1024 [MB] (17 MBps) [2024-10-25T15:28:25.356Z] Copying: 258/1024 [MB] (17 MBps) [2024-10-25T15:28:26.293Z] Copying: 276/1024 [MB] (17 MBps) [2024-10-25T15:28:27.231Z] Copying: 293/1024 [MB] (17 MBps) [2024-10-25T15:28:28.168Z] Copying: 310/1024 [MB] (17 MBps) [2024-10-25T15:28:29.105Z] Copying: 327/1024 [MB] (17 MBps) [2024-10-25T15:28:30.482Z] Copying: 345/1024 [MB] (17 MBps) [2024-10-25T15:28:31.416Z] Copying: 362/1024 [MB] (17 MBps) [2024-10-25T15:28:32.353Z] Copying: 380/1024 [MB] (17 MBps) [2024-10-25T15:28:33.320Z] Copying: 398/1024 [MB] (17 MBps) [2024-10-25T15:28:34.255Z] Copying: 415/1024 [MB] (17 MBps) [2024-10-25T15:28:35.192Z] Copying: 433/1024 [MB] (17 MBps) [2024-10-25T15:28:36.129Z] Copying: 451/1024 [MB] (17 MBps) [2024-10-25T15:28:37.506Z] Copying: 468/1024 [MB] (17 MBps) [2024-10-25T15:28:38.074Z] Copying: 486/1024 [MB] (17 MBps) [2024-10-25T15:28:39.451Z] Copying: 504/1024 [MB] (17 MBps) [2024-10-25T15:28:40.386Z] Copying: 521/1024 [MB] (17 MBps) [2024-10-25T15:28:41.323Z] Copying: 539/1024 [MB] (17 MBps) [2024-10-25T15:28:42.260Z] Copying: 557/1024 [MB] (18 MBps) [2024-10-25T15:28:43.197Z] Copying: 575/1024 [MB] (17 MBps) [2024-10-25T15:28:44.134Z] Copying: 594/1024 [MB] (18 MBps) [2024-10-25T15:28:45.143Z] Copying: 612/1024 [MB] (18 MBps) [2024-10-25T15:28:46.077Z] Copying: 630/1024 [MB] (18 MBps) [2024-10-25T15:28:47.455Z] Copying: 652/1024 [MB] (21 MBps) [2024-10-25T15:28:48.392Z] Copying: 671/1024 [MB] (18 MBps) [2024-10-25T15:28:49.083Z] Copying: 689/1024 [MB] (18 MBps) [2024-10-25T15:28:50.461Z] Copying: 707/1024 [MB] (18 MBps) [2024-10-25T15:28:51.396Z] Copying: 726/1024 [MB] (18 MBps) [2024-10-25T15:28:52.332Z] Copying: 744/1024 [MB] (18 MBps) [2024-10-25T15:28:53.269Z] Copying: 762/1024 [MB] (17 MBps) [2024-10-25T15:28:54.205Z] Copying: 779/1024 [MB] (17 MBps) [2024-10-25T15:28:55.142Z] Copying: 797/1024 [MB] (17 MBps) [2024-10-25T15:28:56.081Z] Copying: 815/1024 [MB] (18 MBps) [2024-10-25T15:28:57.460Z] Copying: 834/1024 [MB] (18 MBps) [2024-10-25T15:28:58.405Z] Copying: 852/1024 [MB] (17 MBps) [2024-10-25T15:28:59.341Z] Copying: 870/1024 [MB] (18 MBps) [2024-10-25T15:29:00.277Z] Copying: 888/1024 [MB] (17 MBps) 
[2024-10-25T15:29:01.212Z] Copying: 906/1024 [MB] (18 MBps) [2024-10-25T15:29:02.144Z] Copying: 924/1024 [MB] (18 MBps) [2024-10-25T15:29:03.079Z] Copying: 942/1024 [MB] (17 MBps) [2024-10-25T15:29:04.456Z] Copying: 961/1024 [MB] (18 MBps) [2024-10-25T15:29:05.392Z] Copying: 980/1024 [MB] (18 MBps) [2024-10-25T15:29:06.329Z] Copying: 999/1024 [MB] (19 MBps) [2024-10-25T15:29:06.588Z] Copying: 1018/1024 [MB] (18 MBps) [2024-10-25T15:29:07.527Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:24:24.799 00:24:24.799 15:29:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:24.799 15:29:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:25.058 15:29:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:25.318 [2024-10-25 15:29:07.871323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.871378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:25.318 [2024-10-25 15:29:07.871395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:25.318 [2024-10-25 15:29:07.871408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.871434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:25.318 [2024-10-25 15:29:07.875620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.875659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:25.318 [2024-10-25 15:29:07.875675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.170 ms 00:24:25.318 [2024-10-25 15:29:07.875686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.877779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.877819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:25.318 [2024-10-25 15:29:07.877835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:24:25.318 [2024-10-25 15:29:07.877846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.895441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.895482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:25.318 [2024-10-25 15:29:07.895499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.590 ms 00:24:25.318 [2024-10-25 15:29:07.895513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.900467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.900630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:25.318 [2024-10-25 15:29:07.900656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:24:25.318 [2024-10-25 15:29:07.900667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.936508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.936544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:25.318 [2024-10-25 15:29:07.936559] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.817 ms 00:24:25.318 [2024-10-25 15:29:07.936585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.958400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.958439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:25.318 [2024-10-25 15:29:07.958456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.803 ms 00:24:25.318 [2024-10-25 15:29:07.958466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.958614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.958628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:25.318 [2024-10-25 15:29:07.958653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:24:25.318 [2024-10-25 15:29:07.958663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:07.994979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:07.995019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:25.318 [2024-10-25 15:29:07.995036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.351 ms 00:24:25.318 [2024-10-25 15:29:07.995046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.318 [2024-10-25 15:29:08.031108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.318 [2024-10-25 15:29:08.031146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:25.318 [2024-10-25 15:29:08.031161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.076 ms 00:24:25.318 [2024-10-25 15:29:08.031171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.579 [2024-10-25 15:29:08.065486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.579 [2024-10-25 15:29:08.065523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:25.579 [2024-10-25 15:29:08.065554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.313 ms 00:24:25.579 [2024-10-25 15:29:08.065563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.579 [2024-10-25 15:29:08.100136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.579 [2024-10-25 15:29:08.100197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:25.579 [2024-10-25 15:29:08.100214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.532 ms 00:24:25.579 [2024-10-25 15:29:08.100240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.579 [2024-10-25 15:29:08.100281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:25.579 [2024-10-25 15:29:08.100297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:25.579 [2024-10-25 15:29:08.100834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100977] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.100987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 
15:29:08.101283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:25.580 [2024-10-25 15:29:08.101555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:25.580 [2024-10-25 15:29:08.101567] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc8ab459-38ff-48ba-a6d3-54204f33fed3 00:24:25.580 [2024-10-25 15:29:08.101578] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:25.580 [2024-10-25 15:29:08.101593] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:25.580 [2024-10-25 15:29:08.101602] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:25.580 [2024-10-25 15:29:08.101614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:25.580 [2024-10-25 15:29:08.101623] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:25.580 [2024-10-25 15:29:08.101640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:25.580 [2024-10-25 15:29:08.101650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:25.580 [2024-10-25 15:29:08.101661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:25.580 [2024-10-25 15:29:08.101670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:25.580 [2024-10-25 15:29:08.101682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.580 [2024-10-25 15:29:08.101692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:25.580 [2024-10-25 15:29:08.101705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:24:25.580 [2024-10-25 15:29:08.101715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.580 [2024-10-25 15:29:08.120938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.580 [2024-10-25 15:29:08.120972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:25.580 [2024-10-25 15:29:08.121002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.199 ms 00:24:25.580 [2024-10-25 15:29:08.121014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.580 [2024-10-25 15:29:08.121565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:25.580 [2024-10-25 15:29:08.121577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:25.580 [2024-10-25 15:29:08.121590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:24:25.580 [2024-10-25 15:29:08.121599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.580 [2024-10-25 15:29:08.186457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.580 [2024-10-25 15:29:08.186497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:25.580 [2024-10-25 15:29:08.186513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.580 [2024-10-25 15:29:08.186527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.580 [2024-10-25 15:29:08.186584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.580 [2024-10-25 15:29:08.186595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:25.580 [2024-10-25 15:29:08.186608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.580 [2024-10-25 15:29:08.186618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.580 [2024-10-25 15:29:08.186715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.580 [2024-10-25 15:29:08.186729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:25.580 [2024-10-25 15:29:08.186742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.580 [2024-10-25 15:29:08.186752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:25.580 [2024-10-25 15:29:08.186780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.580 [2024-10-25 15:29:08.186790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:25.580 [2024-10-25 15:29:08.186804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.580 [2024-10-25 15:29:08.186814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.839 [2024-10-25 15:29:08.306851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.839 [2024-10-25 15:29:08.306914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:25.839 [2024-10-25 15:29:08.306932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.306947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.405428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:25.840 [2024-10-25 15:29:08.405444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.405470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.405589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:25.840 [2024-10-25 15:29:08.405602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.405612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.405681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:25.840 [2024-10-25 15:29:08.405694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.405704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.405856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:25.840 [2024-10-25 15:29:08.405869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.405879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.405935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:25.840 [2024-10-25 15:29:08.405948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.405957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.405999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.406010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:25.840 [2024-10-25 15:29:08.406023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 
15:29:08.406033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.406086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:25.840 [2024-10-25 15:29:08.406098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:25.840 [2024-10-25 15:29:08.406111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:25.840 [2024-10-25 15:29:08.406121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:25.840 [2024-10-25 15:29:08.406284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.763 ms, result 0 00:24:25.840 true 00:24:25.840 15:29:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78245 00:24:25.840 15:29:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78245 00:24:25.840 15:29:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:25.840 [2024-10-25 15:29:08.531540] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:24:25.840 [2024-10-25 15:29:08.531682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79093 ] 00:24:26.099 [2024-10-25 15:29:08.710226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.099 [2024-10-25 15:29:08.822446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.482  [2024-10-25T15:29:11.149Z] Copying: 206/1024 [MB] (206 MBps) [2024-10-25T15:29:12.530Z] Copying: 413/1024 [MB] (206 MBps) [2024-10-25T15:29:13.468Z] Copying: 623/1024 [MB] (209 MBps) [2024-10-25T15:29:14.454Z] Copying: 826/1024 [MB] (203 MBps) [2024-10-25T15:29:15.388Z] Copying: 1024/1024 [MB] (average 206 MBps) 00:24:32.660 00:24:32.660 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78245 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:32.660 15:29:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:32.660 [2024-10-25 15:29:15.293129] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
00:24:32.660 [2024-10-25 15:29:15.293262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79163 ] 00:24:32.918 [2024-10-25 15:29:15.477024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.918 [2024-10-25 15:29:15.583557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.485 [2024-10-25 15:29:15.933886] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.485 [2024-10-25 15:29:15.933954] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:33.485 [2024-10-25 15:29:15.999371] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:33.485 [2024-10-25 15:29:15.999669] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:33.485 [2024-10-25 15:29:15.999937] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:33.745 [2024-10-25 15:29:16.278461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.278506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:33.745 [2024-10-25 15:29:16.278520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:33.745 [2024-10-25 15:29:16.278530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.278595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.278607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:33.745 [2024-10-25 15:29:16.278617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:33.745 [2024-10-25 15:29:16.278627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.278648] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:33.745 [2024-10-25 15:29:16.279605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:33.745 [2024-10-25 15:29:16.279646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.279657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:33.745 [2024-10-25 15:29:16.279669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:24:33.745 [2024-10-25 15:29:16.279679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.281199] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:33.745 [2024-10-25 15:29:16.299285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.299332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:33.745 [2024-10-25 15:29:16.299362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.116 ms 00:24:33.745 [2024-10-25 15:29:16.299373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.299431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.299444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:33.745 [2024-10-25 15:29:16.299455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:33.745 [2024-10-25 15:29:16.299465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.306265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.306297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:33.745 [2024-10-25 15:29:16.306309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.739 ms 00:24:33.745 [2024-10-25 15:29:16.306319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.306401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.306415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:33.745 [2024-10-25 15:29:16.306425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:33.745 [2024-10-25 15:29:16.306435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.306478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.306493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:33.745 [2024-10-25 15:29:16.306504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:33.745 [2024-10-25 15:29:16.306513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.306537] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:33.745 [2024-10-25 15:29:16.311193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.311227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:33.745 [2024-10-25 15:29:16.311255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:24:33.745 [2024-10-25 15:29:16.311265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.311295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.311305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:33.745 [2024-10-25 15:29:16.311316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:33.745 [2024-10-25 15:29:16.311325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.311376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:33.745 [2024-10-25 15:29:16.311406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:33.745 [2024-10-25 15:29:16.311440] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:33.745 [2024-10-25 15:29:16.311457] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:33.745 [2024-10-25 15:29:16.311547] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.745 [2024-10-25 15:29:16.311578] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.745 
[2024-10-25 15:29:16.311598] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:33.745 [2024-10-25 15:29:16.311617] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.745 [2024-10-25 15:29:16.311636] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.745 [2024-10-25 15:29:16.311650] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.745 [2024-10-25 15:29:16.311662] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.745 [2024-10-25 15:29:16.311678] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.745 [2024-10-25 15:29:16.311692] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.745 [2024-10-25 15:29:16.311703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.311713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.745 [2024-10-25 15:29:16.311724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:24:33.745 [2024-10-25 15:29:16.311734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.311811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.745 [2024-10-25 15:29:16.311826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.745 [2024-10-25 15:29:16.311843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:33.745 [2024-10-25 15:29:16.311855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.745 [2024-10-25 15:29:16.311956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.745 [2024-10-25 15:29:16.311980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.745 [2024-10-25 15:29:16.311995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.745 [2024-10-25 15:29:16.312008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.745 [2024-10-25 15:29:16.312021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.746 [2024-10-25 15:29:16.312033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.746 [2024-10-25 15:29:16.312072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.746 [2024-10-25 15:29:16.312097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.746 [2024-10-25 15:29:16.312117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.746 [2024-10-25 15:29:16.312127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.746 [2024-10-25 15:29:16.312141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.746 [2024-10-25 15:29:16.312154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.746 [2024-10-25 15:29:16.312163] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.746 [2024-10-25 15:29:16.312182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.746 [2024-10-25 15:29:16.312223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.746 [2024-10-25 15:29:16.312251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.746 [2024-10-25 15:29:16.312279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.746 [2024-10-25 15:29:16.312306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.746 [2024-10-25 15:29:16.312339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.746 [2024-10-25 15:29:16.312358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.746 [2024-10-25 15:29:16.312374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.746 [2024-10-25 15:29:16.312390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.746 [2024-10-25 15:29:16.312402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.746 [2024-10-25 15:29:16.312415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.746 [2024-10-25 15:29:16.312426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.746 [2024-10-25 15:29:16.312450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.746 [2024-10-25 15:29:16.312462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.746 [2024-10-25 15:29:16.312477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.746 [2024-10-25 15:29:16.312491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:33.746 [2024-10-25 15:29:16.312509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.746 [2024-10-25 
15:29:16.312533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.746 [2024-10-25 15:29:16.312543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.746 [2024-10-25 15:29:16.312558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.746 [2024-10-25 15:29:16.312569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.746 [2024-10-25 15:29:16.312578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.746 [2024-10-25 15:29:16.312588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.746 [2024-10-25 15:29:16.312599] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.746 [2024-10-25 15:29:16.312611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.746 [2024-10-25 15:29:16.312633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.746 [2024-10-25 15:29:16.312644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.746 [2024-10-25 15:29:16.312654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.746 [2024-10-25 15:29:16.312664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.746 [2024-10-25 15:29:16.312674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.746 [2024-10-25 15:29:16.312686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.746 [2024-10-25 15:29:16.312702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.746 [2024-10-25 15:29:16.312712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.746 [2024-10-25 15:29:16.312722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.746 [2024-10-25 15:29:16.312796] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:33.746 [2024-10-25 15:29:16.312810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.746 [2024-10-25 15:29:16.312838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.746 [2024-10-25 15:29:16.312855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.746 [2024-10-25 15:29:16.312868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.746 [2024-10-25 15:29:16.312881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.312892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.746 [2024-10-25 15:29:16.312902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:24:33.746 [2024-10-25 15:29:16.312912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.351006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.351050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.746 [2024-10-25 15:29:16.351064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.100 ms 00:24:33.746 [2024-10-25 15:29:16.351075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.351156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.351199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:33.746 [2024-10-25 15:29:16.351210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:33.746 [2024-10-25 15:29:16.351220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.406909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.406956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.746 [2024-10-25 15:29:16.406987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.721 ms 00:24:33.746 [2024-10-25 15:29:16.407007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.407058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.407068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.746 [2024-10-25 15:29:16.407079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:33.746 [2024-10-25 15:29:16.407089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.407626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.407644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.746 [2024-10-25 15:29:16.407655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:24:33.746 [2024-10-25 15:29:16.407667] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.407821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.407836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.746 [2024-10-25 15:29:16.407847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:24:33.746 [2024-10-25 15:29:16.407857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.426221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.426258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.746 [2024-10-25 15:29:16.426271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.373 ms 00:24:33.746 [2024-10-25 15:29:16.426282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.746 [2024-10-25 15:29:16.445212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:33.746 [2024-10-25 15:29:16.445255] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:33.746 [2024-10-25 15:29:16.445270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.746 [2024-10-25 15:29:16.445281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:33.746 [2024-10-25 15:29:16.445309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.903 ms 00:24:33.747 [2024-10-25 15:29:16.445319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.006 [2024-10-25 15:29:16.473978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.474023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:34.006 [2024-10-25 15:29:16.474050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.649 ms 00:24:34.006 [2024-10-25 15:29:16.474061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.006 [2024-10-25 15:29:16.492278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.492319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:34.006 [2024-10-25 15:29:16.492333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.209 ms 00:24:34.006 [2024-10-25 15:29:16.492343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.006 [2024-10-25 15:29:16.510211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.510249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:34.006 [2024-10-25 15:29:16.510277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.856 ms 00:24:34.006 [2024-10-25 15:29:16.510287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.006 [2024-10-25 15:29:16.511107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.511137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:34.006 [2024-10-25 15:29:16.511150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:24:34.006 [2024-10-25 15:29:16.511161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
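
The hex blk_offs/blk_sz values in the "SB metadata layout" dump above are counts of FTL blocks, while the dump_region lines elsewhere in this log render the same regions in MiB. A minimal conversion sketch, assuming SPDK FTL's 4 KiB block size (an assumption based on the FTL defaults, not something printed in this log):

# Python sketch: convert blk_offs/blk_sz block counts from the superblock
# dump into MiB. The 4 KiB block size is an assumption; the type-to-region
# mapping (0x2 -> l2p) is inferred from how the sizes line up with the
# dump_region output, not read from the superblock itself.
FTL_BLOCK_SIZE = 4096  # bytes, assumed

def blocks_to_mib(nblocks):
    return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

print(blocks_to_mib(0x20))    # 0.125 -> matches "offset: 0.12 MiB" for l2p
print(blocks_to_mib(0x5000))  # 80.0  -> matches "blocks: 80.00 MiB" for l2p
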
00:24:34.006 [2024-10-25 15:29:16.594886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.594950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:34.006 [2024-10-25 15:29:16.594984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.823 ms 00:24:34.006 [2024-10-25 15:29:16.595002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.006 [2024-10-25 15:29:16.606778] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:34.006 [2024-10-25 15:29:16.610084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.006 [2024-10-25 15:29:16.610115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:34.006 [2024-10-25 15:29:16.610145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.032 ms 00:24:34.006 [2024-10-25 15:29:16.610157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.610271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 15:29:16.610288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:34.007 [2024-10-25 15:29:16.610300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:34.007 [2024-10-25 15:29:16.610310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.610397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 15:29:16.610409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:34.007 [2024-10-25 15:29:16.610421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:34.007 [2024-10-25 15:29:16.610451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.610485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 15:29:16.610498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:34.007 [2024-10-25 15:29:16.610518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:34.007 [2024-10-25 15:29:16.610534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.610577] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:34.007 [2024-10-25 15:29:16.610592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 15:29:16.610602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:34.007 [2024-10-25 15:29:16.610612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:34.007 [2024-10-25 15:29:16.610622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.646950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 15:29:16.647021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:34.007 [2024-10-25 15:29:16.647038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.358 ms 00:24:34.007 [2024-10-25 15:29:16.647048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.647146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.007 [2024-10-25 
15:29:16.647159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:34.007 [2024-10-25 15:29:16.647171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:34.007 [2024-10-25 15:29:16.647191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.007 [2024-10-25 15:29:16.648432] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.083 ms, result 0 00:24:34.944  [2024-10-25T15:29:19.051Z] Copying: 28/1024 [MB] (28 MBps) [2024-10-25T15:29:19.990Z] Copying: 55/1024 [MB] (26 MBps) [2024-10-25T15:29:20.925Z] Copying: 81/1024 [MB] (26 MBps) [2024-10-25T15:29:21.863Z] Copying: 107/1024 [MB] (26 MBps) [2024-10-25T15:29:22.801Z] Copying: 133/1024 [MB] (25 MBps) [2024-10-25T15:29:23.738Z] Copying: 158/1024 [MB] (25 MBps) [2024-10-25T15:29:24.674Z] Copying: 185/1024 [MB] (26 MBps) [2024-10-25T15:29:26.051Z] Copying: 211/1024 [MB] (26 MBps) [2024-10-25T15:29:26.983Z] Copying: 241/1024 [MB] (29 MBps) [2024-10-25T15:29:27.918Z] Copying: 276/1024 [MB] (35 MBps) [2024-10-25T15:29:28.852Z] Copying: 308/1024 [MB] (31 MBps) [2024-10-25T15:29:29.823Z] Copying: 335/1024 [MB] (27 MBps) [2024-10-25T15:29:30.756Z] Copying: 361/1024 [MB] (26 MBps) [2024-10-25T15:29:31.686Z] Copying: 389/1024 [MB] (27 MBps) [2024-10-25T15:29:33.058Z] Copying: 418/1024 [MB] (29 MBps) [2024-10-25T15:29:33.990Z] Copying: 452/1024 [MB] (33 MBps) [2024-10-25T15:29:34.926Z] Copying: 486/1024 [MB] (34 MBps) [2024-10-25T15:29:35.866Z] Copying: 517/1024 [MB] (30 MBps) [2024-10-25T15:29:36.808Z] Copying: 544/1024 [MB] (27 MBps) [2024-10-25T15:29:37.746Z] Copying: 570/1024 [MB] (25 MBps) [2024-10-25T15:29:38.682Z] Copying: 597/1024 [MB] (26 MBps) [2024-10-25T15:29:40.060Z] Copying: 622/1024 [MB] (25 MBps) [2024-10-25T15:29:40.628Z] Copying: 649/1024 [MB] (26 MBps) [2024-10-25T15:29:42.017Z] Copying: 675/1024 [MB] (25 MBps) [2024-10-25T15:29:42.962Z] Copying: 700/1024 [MB] (25 MBps) [2024-10-25T15:29:43.898Z] Copying: 726/1024 [MB] (25 MBps) [2024-10-25T15:29:44.833Z] Copying: 752/1024 [MB] (25 MBps) [2024-10-25T15:29:45.803Z] Copying: 777/1024 [MB] (25 MBps) [2024-10-25T15:29:46.738Z] Copying: 803/1024 [MB] (26 MBps) [2024-10-25T15:29:47.671Z] Copying: 829/1024 [MB] (26 MBps) [2024-10-25T15:29:49.045Z] Copying: 856/1024 [MB] (26 MBps) [2024-10-25T15:29:49.613Z] Copying: 882/1024 [MB] (26 MBps) [2024-10-25T15:29:50.987Z] Copying: 908/1024 [MB] (26 MBps) [2024-10-25T15:29:51.922Z] Copying: 935/1024 [MB] (26 MBps) [2024-10-25T15:29:52.857Z] Copying: 960/1024 [MB] (25 MBps) [2024-10-25T15:29:53.796Z] Copying: 986/1024 [MB] (25 MBps) [2024-10-25T15:29:54.741Z] Copying: 1012/1024 [MB] (26 MBps) [2024-10-25T15:29:54.999Z] Copying: 1023/1024 [MB] (11 MBps) [2024-10-25T15:29:54.999Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-25 15:29:54.756249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.756475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:12.271 [2024-10-25 15:29:54.756563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:12.271 [2024-10-25 15:29:54.756601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.757711] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:12.271 [2024-10-25 15:29:54.763573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:12.271 [2024-10-25 15:29:54.763610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:12.271 [2024-10-25 15:29:54.763623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.734 ms 00:25:12.271 [2024-10-25 15:29:54.763634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.773482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.773524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:12.271 [2024-10-25 15:29:54.773553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.114 ms 00:25:12.271 [2024-10-25 15:29:54.773563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.796567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.796603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:12.271 [2024-10-25 15:29:54.796617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.022 ms 00:25:12.271 [2024-10-25 15:29:54.796629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.801607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.801637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:12.271 [2024-10-25 15:29:54.801671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:25:12.271 [2024-10-25 15:29:54.801682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.837988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.838022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:12.271 [2024-10-25 15:29:54.838052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.322 ms 00:25:12.271 [2024-10-25 15:29:54.838062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.858992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.859032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:12.271 [2024-10-25 15:29:54.859046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.928 ms 00:25:12.271 [2024-10-25 15:29:54.859072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.271 [2024-10-25 15:29:54.973635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.271 [2024-10-25 15:29:54.973681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:12.271 [2024-10-25 15:29:54.973695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.710 ms 00:25:12.271 [2024-10-25 15:29:54.973712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.531 [2024-10-25 15:29:55.009518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.531 [2024-10-25 15:29:55.009548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:12.531 [2024-10-25 15:29:55.009560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.847 ms 00:25:12.532 [2024-10-25 15:29:55.009586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.532 [2024-10-25 
15:29:55.044828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.532 [2024-10-25 15:29:55.044858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:12.532 [2024-10-25 15:29:55.044870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.263 ms 00:25:12.532 [2024-10-25 15:29:55.044895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.532 [2024-10-25 15:29:55.080095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.532 [2024-10-25 15:29:55.080127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:12.532 [2024-10-25 15:29:55.080139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.221 ms 00:25:12.532 [2024-10-25 15:29:55.080149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.532 [2024-10-25 15:29:55.115399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.532 [2024-10-25 15:29:55.115447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:12.532 [2024-10-25 15:29:55.115460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.229 ms 00:25:12.532 [2024-10-25 15:29:55.115469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.532 [2024-10-25 15:29:55.115504] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:12.532 [2024-10-25 15:29:55.115519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106240 / 261120 wr_cnt: 1 state: open 00:25:12.532 [2024-10-25 15:29:55.115532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.115993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116218] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:12.532 [2024-10-25 15:29:55.116293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 
15:29:55.116482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:12.533 [2024-10-25 15:29:55.116614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:12.533 [2024-10-25 15:29:55.116624] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc8ab459-38ff-48ba-a6d3-54204f33fed3 00:25:12.533 [2024-10-25 15:29:55.116634] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106240 00:25:12.533 [2024-10-25 15:29:55.116644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107200 00:25:12.533 [2024-10-25 15:29:55.116668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106240 00:25:12.533 [2024-10-25 15:29:55.116678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:25:12.533 [2024-10-25 15:29:55.116687] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:12.533 [2024-10-25 15:29:55.116698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:12.533 [2024-10-25 15:29:55.116708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:12.533 [2024-10-25 15:29:55.116716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:12.533 [2024-10-25 15:29:55.116725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:12.533 [2024-10-25 15:29:55.116734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.533 [2024-10-25 15:29:55.116744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:12.533 [2024-10-25 15:29:55.116754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:25:12.533 [2024-10-25 15:29:55.116764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.135187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.533 [2024-10-25 15:29:55.135245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:12.533 [2024-10-25 15:29:55.135265] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.419 ms 00:25:12.533 [2024-10-25 15:29:55.135282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.135827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:12.533 [2024-10-25 15:29:55.135859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:12.533 [2024-10-25 15:29:55.135881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:25:12.533 [2024-10-25 15:29:55.135899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.186603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.533 [2024-10-25 15:29:55.186638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.533 [2024-10-25 15:29:55.186653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.533 [2024-10-25 15:29:55.186663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.186724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.533 [2024-10-25 15:29:55.186735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.533 [2024-10-25 15:29:55.186746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.533 [2024-10-25 15:29:55.186755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.186838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.533 [2024-10-25 15:29:55.186851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.533 [2024-10-25 15:29:55.186862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.533 [2024-10-25 15:29:55.186872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.533 [2024-10-25 15:29:55.186889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.533 [2024-10-25 15:29:55.186900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.533 [2024-10-25 15:29:55.186910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.533 [2024-10-25 15:29:55.186920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.311340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.311387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.793 [2024-10-25 15:29:55.311401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.311412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.793 [2024-10-25 15:29:55.411344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:25:12.793 [2024-10-25 15:29:55.411473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.793 [2024-10-25 15:29:55.411549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.793 [2024-10-25 15:29:55.411692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.793 [2024-10-25 15:29:55.411757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.793 [2024-10-25 15:29:55.411826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.411879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.793 [2024-10-25 15:29:55.411890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.793 [2024-10-25 15:29:55.411900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.793 [2024-10-25 15:29:55.411910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.793 [2024-10-25 15:29:55.412021] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 659.031 ms, result 0 00:25:14.696 00:25:14.696 00:25:14.696 15:29:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:16.601 15:29:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:16.601 [2024-10-25 15:29:59.013401] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
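
The spdk_dd read-back above requests --count=262144 blocks from ftl0. A quick sanity check of the transfer size, assuming the FTL bdev's 4 KiB logical block size (spdk_dd counts in input-block units; the block size is an assumption, not printed here):

# Python sketch: transfer size implied by the spdk_dd command line above.
blocks = 262144                          # --count from the command line
block_size = 4096                        # assumed ftl0 LBA size, in bytes
mib = blocks * block_size / (1024 * 1024)
print(mib)       # 1024.0 -> matches the "Copying: .../1024 [MB]" totals
                 #           reported during the earlier write pass
print(mib / 26)  # ~39.4 s at the ~26 MBps average that pass reported,
                 #   consistent with its ~38 s span of progress timestamps
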
00:25:16.601 [2024-10-25 15:29:59.013534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79609 ] 00:25:16.601 [2024-10-25 15:29:59.196309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.601 [2024-10-25 15:29:59.307758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.174 [2024-10-25 15:29:59.649033] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.174 [2024-10-25 15:29:59.649117] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.174 [2024-10-25 15:29:59.809139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.809210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:17.174 [2024-10-25 15:29:59.809239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:17.174 [2024-10-25 15:29:59.809257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.809325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.809362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.174 [2024-10-25 15:29:59.809384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:17.174 [2024-10-25 15:29:59.809402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.809436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:17.174 [2024-10-25 15:29:59.810391] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:17.174 [2024-10-25 15:29:59.810437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.810456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.174 [2024-10-25 15:29:59.810475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:25:17.174 [2024-10-25 15:29:59.810492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.812034] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:17.174 [2024-10-25 15:29:59.830996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.831050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:17.174 [2024-10-25 15:29:59.831073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.994 ms 00:25:17.174 [2024-10-25 15:29:59.831091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.831226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.831250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:17.174 [2024-10-25 15:29:59.831265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:17.174 [2024-10-25 15:29:59.831282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.837948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:17.174 [2024-10-25 15:29:59.837988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.174 [2024-10-25 15:29:59.838009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:25:17.174 [2024-10-25 15:29:59.838025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.838148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.838167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.174 [2024-10-25 15:29:59.838178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:25:17.174 [2024-10-25 15:29:59.838205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.838270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.838299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.174 [2024-10-25 15:29:59.838319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:17.174 [2024-10-25 15:29:59.838338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.174 [2024-10-25 15:29:59.838379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.174 [2024-10-25 15:29:59.843121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.174 [2024-10-25 15:29:59.843163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.174 [2024-10-25 15:29:59.843208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:25:17.174 [2024-10-25 15:29:59.843233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.175 [2024-10-25 15:29:59.843281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.175 [2024-10-25 15:29:59.843302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.175 [2024-10-25 15:29:59.843321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:17.175 [2024-10-25 15:29:59.843338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.175 [2024-10-25 15:29:59.843413] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:17.175 [2024-10-25 15:29:59.843453] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:17.175 [2024-10-25 15:29:59.843505] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:17.175 [2024-10-25 15:29:59.843542] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:17.175 [2024-10-25 15:29:59.843659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:17.175 [2024-10-25 15:29:59.843686] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.175 [2024-10-25 15:29:59.843709] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:17.175 [2024-10-25 15:29:59.843729] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.175 [2024-10-25 15:29:59.843746] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.175 [2024-10-25 15:29:59.843762] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.175 [2024-10-25 15:29:59.843777] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.175 [2024-10-25 15:29:59.843802] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:17.175 [2024-10-25 15:29:59.843817] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:17.175 [2024-10-25 15:29:59.843839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.175 [2024-10-25 15:29:59.843853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.175 [2024-10-25 15:29:59.843871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:25:17.175 [2024-10-25 15:29:59.843888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.175 [2024-10-25 15:29:59.843993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.175 [2024-10-25 15:29:59.844020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.175 [2024-10-25 15:29:59.844040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:17.175 [2024-10-25 15:29:59.844058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.175 [2024-10-25 15:29:59.844198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.175 [2024-10-25 15:29:59.844227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.175 [2024-10-25 15:29:59.844247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.175 [2024-10-25 15:29:59.844301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.175 [2024-10-25 15:29:59.844350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.175 [2024-10-25 15:29:59.844385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.175 [2024-10-25 15:29:59.844405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.175 [2024-10-25 15:29:59.844422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.175 [2024-10-25 15:29:59.844439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.175 [2024-10-25 15:29:59.844458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:17.175 [2024-10-25 15:29:59.844489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.175 [2024-10-25 15:29:59.844526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844543] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.175 [2024-10-25 15:29:59.844581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.175 [2024-10-25 15:29:59.844634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:17.175 [2024-10-25 15:29:59.844685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.175 [2024-10-25 15:29:59.844736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.175 [2024-10-25 15:29:59.844773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.175 [2024-10-25 15:29:59.844792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.175 [2024-10-25 15:29:59.844828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.175 [2024-10-25 15:29:59.844846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:17.175 [2024-10-25 15:29:59.844863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.175 [2024-10-25 15:29:59.844879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:17.175 [2024-10-25 15:29:59.844896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:17.175 [2024-10-25 15:29:59.844913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:17.175 [2024-10-25 15:29:59.844949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:17.175 [2024-10-25 15:29:59.844967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.844983] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.175 [2024-10-25 15:29:59.845001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.175 [2024-10-25 15:29:59.845019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.175 [2024-10-25 15:29:59.845039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.175 [2024-10-25 15:29:59.845058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.175 [2024-10-25 15:29:59.845077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.175 [2024-10-25 15:29:59.845094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.175 
[2024-10-25 15:29:59.845110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.175 [2024-10-25 15:29:59.845128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.175 [2024-10-25 15:29:59.845146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.175 [2024-10-25 15:29:59.845167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.175 [2024-10-25 15:29:59.845207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.175 [2024-10-25 15:29:59.845230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.175 [2024-10-25 15:29:59.845249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:17.175 [2024-10-25 15:29:59.845268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:17.175 [2024-10-25 15:29:59.845286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:17.175 [2024-10-25 15:29:59.845305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:17.175 [2024-10-25 15:29:59.845325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:17.175 [2024-10-25 15:29:59.845344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:17.175 [2024-10-25 15:29:59.845363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:17.175 [2024-10-25 15:29:59.845381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:17.175 [2024-10-25 15:29:59.845400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:17.175 [2024-10-25 15:29:59.845419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:17.175 [2024-10-25 15:29:59.845439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:17.175 [2024-10-25 15:29:59.845459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:17.175 [2024-10-25 15:29:59.845479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:17.175 [2024-10-25 15:29:59.845502] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.176 [2024-10-25 15:29:59.845522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.176 [2024-10-25 15:29:59.845546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.176 [2024-10-25 15:29:59.845561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.176 [2024-10-25 15:29:59.845577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.176 [2024-10-25 15:29:59.845594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:17.176 [2024-10-25 15:29:59.845612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.176 [2024-10-25 15:29:59.845630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.176 [2024-10-25 15:29:59.845649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:25:17.176 [2024-10-25 15:29:59.845667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.176 [2024-10-25 15:29:59.886219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.176 [2024-10-25 15:29:59.886268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:17.176 [2024-10-25 15:29:59.886290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.542 ms 00:25:17.176 [2024-10-25 15:29:59.886308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.176 [2024-10-25 15:29:59.886407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.176 [2024-10-25 15:29:59.886439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:17.176 [2024-10-25 15:29:59.886459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:17.176 [2024-10-25 15:29:59.886479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.945493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.945538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:17.435 [2024-10-25 15:29:59.945560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.031 ms 00:25:17.435 [2024-10-25 15:29:59.945579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.945629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.945649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:17.435 [2024-10-25 15:29:59.945668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:17.435 [2024-10-25 15:29:59.945691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.946210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.946238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:17.435 [2024-10-25 15:29:59.946258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:25:17.435 [2024-10-25 15:29:59.946276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.946411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.946435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:17.435 [2024-10-25 15:29:59.946455] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:17.435 [2024-10-25 15:29:59.946473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.965637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.965679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:17.435 [2024-10-25 15:29:59.965700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.148 ms 00:25:17.435 [2024-10-25 15:29:59.965724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:29:59.984804] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:17.435 [2024-10-25 15:29:59.984849] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:17.435 [2024-10-25 15:29:59.984873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:29:59.984890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:17.435 [2024-10-25 15:29:59.984909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.032 ms 00:25:17.435 [2024-10-25 15:29:59.984925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.014437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.014504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:17.435 [2024-10-25 15:30:00.014527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.504 ms 00:25:17.435 [2024-10-25 15:30:00.014545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.033288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.033342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:17.435 [2024-10-25 15:30:00.033364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.703 ms 00:25:17.435 [2024-10-25 15:30:00.033382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.051535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.051578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:17.435 [2024-10-25 15:30:00.051600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:25:17.435 [2024-10-25 15:30:00.051617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.052443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.052481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:17.435 [2024-10-25 15:30:00.052502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:25:17.435 [2024-10-25 15:30:00.052520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.136663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.136737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:17.435 [2024-10-25 15:30:00.136761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.234 ms 00:25:17.435 [2024-10-25 15:30:00.136783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.147638] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:17.435 [2024-10-25 15:30:00.149933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.149970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:17.435 [2024-10-25 15:30:00.149991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.039 ms 00:25:17.435 [2024-10-25 15:30:00.150008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.150128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.150161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:17.435 [2024-10-25 15:30:00.150192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.435 [2024-10-25 15:30:00.150213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.435 [2024-10-25 15:30:00.151706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.435 [2024-10-25 15:30:00.151753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:17.435 [2024-10-25 15:30:00.151773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:25:17.436 [2024-10-25 15:30:00.151791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.436 [2024-10-25 15:30:00.151837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.436 [2024-10-25 15:30:00.151857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:17.436 [2024-10-25 15:30:00.151875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:17.436 [2024-10-25 15:30:00.151892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.436 [2024-10-25 15:30:00.151974] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:17.436 [2024-10-25 15:30:00.152004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.436 [2024-10-25 15:30:00.152022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:17.436 [2024-10-25 15:30:00.152042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:17.436 [2024-10-25 15:30:00.152059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.694 [2024-10-25 15:30:00.188752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.694 [2024-10-25 15:30:00.188799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:17.694 [2024-10-25 15:30:00.188822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.713 ms 00:25:17.694 [2024-10-25 15:30:00.188840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.694 [2024-10-25 15:30:00.188977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.694 [2024-10-25 15:30:00.189004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:17.694 [2024-10-25 15:30:00.189026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:25:17.694 [2024-10-25 15:30:00.189045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:17.694 [2024-10-25 15:30:00.190080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 381.147 ms, result 0
00:25:19.078  [2024-10-25T15:30:02.743Z] Copying: 1224/1048576 [kB] (1224 kBps)
[2024-10-25T15:30:03.679Z] Copying: 10260/1048576 [kB] (9036 kBps)
[2024-10-25T15:30:04.614Z] Copying: 44/1024 [MB] (34 MBps)
[2024-10-25T15:30:05.553Z] Copying: 79/1024 [MB] (34 MBps)
[2024-10-25T15:30:06.490Z] Copying: 113/1024 [MB] (34 MBps)
[2024-10-25T15:30:07.424Z] Copying: 148/1024 [MB] (34 MBps)
[2024-10-25T15:30:08.811Z] Copying: 181/1024 [MB] (33 MBps)
[2024-10-25T15:30:09.748Z] Copying: 215/1024 [MB] (33 MBps)
[2024-10-25T15:30:10.689Z] Copying: 249/1024 [MB] (34 MBps)
[2024-10-25T15:30:11.627Z] Copying: 283/1024 [MB] (34 MBps)
[2024-10-25T15:30:12.562Z] Copying: 316/1024 [MB] (32 MBps)
[2024-10-25T15:30:13.496Z] Copying: 349/1024 [MB] (33 MBps)
[2024-10-25T15:30:14.432Z] Copying: 383/1024 [MB] (33 MBps)
[2024-10-25T15:30:15.813Z] Copying: 416/1024 [MB] (33 MBps)
[2024-10-25T15:30:16.388Z] Copying: 450/1024 [MB] (33 MBps)
[2024-10-25T15:30:17.768Z] Copying: 484/1024 [MB] (33 MBps)
[2024-10-25T15:30:18.708Z] Copying: 517/1024 [MB] (33 MBps)
[2024-10-25T15:30:19.645Z] Copying: 548/1024 [MB] (30 MBps)
[2024-10-25T15:30:20.583Z] Copying: 580/1024 [MB] (32 MBps)
[2024-10-25T15:30:21.520Z] Copying: 613/1024 [MB] (32 MBps)
[2024-10-25T15:30:22.458Z] Copying: 646/1024 [MB] (33 MBps)
[2024-10-25T15:30:23.425Z] Copying: 679/1024 [MB] (33 MBps)
[2024-10-25T15:30:24.805Z] Copying: 711/1024 [MB] (32 MBps)
[2024-10-25T15:30:25.374Z] Copying: 744/1024 [MB] (32 MBps)
[2024-10-25T15:30:26.746Z] Copying: 777/1024 [MB] (32 MBps)
[2024-10-25T15:30:27.683Z] Copying: 810/1024 [MB] (33 MBps)
[2024-10-25T15:30:28.620Z] Copying: 844/1024 [MB] (33 MBps)
[2024-10-25T15:30:29.556Z] Copying: 877/1024 [MB] (33 MBps)
[2024-10-25T15:30:30.497Z] Copying: 911/1024 [MB] (34 MBps)
[2024-10-25T15:30:31.432Z] Copying: 944/1024 [MB] (33 MBps)
[2024-10-25T15:30:32.370Z] Copying: 979/1024 [MB] (34 MBps)
[2024-10-25T15:30:32.938Z] Copying: 1012/1024 [MB] (33 MBps)
[2024-10-25T15:30:33.507Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-10-25 15:30:33.217748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.779 [2024-10-25 15:30:33.217828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:50.779 [2024-10-25 15:30:33.217871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:50.779 [2024-10-25 15:30:33.217888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:50.779 [2024-10-25 15:30:33.217920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:50.779 [2024-10-25 15:30:33.222289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.779 [2024-10-25 15:30:33.222340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:50.779 [2024-10-25 15:30:33.222360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.348 ms
00:25:50.779 [2024-10-25 15:30:33.222376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:50.779 [2024-10-25 15:30:33.222646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.779 [2024-10-25 15:30:33.222677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:50.779 [2024-10-25 15:30:33.222696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.222 ms 00:25:50.779 [2024-10-25 15:30:33.222717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.232340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.232395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:50.779 [2024-10-25 15:30:33.232415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.611 ms 00:25:50.779 [2024-10-25 15:30:33.232432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.239688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.239742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:50.779 [2024-10-25 15:30:33.239760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.176 ms 00:25:50.779 [2024-10-25 15:30:33.239801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.280180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.280229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:50.779 [2024-10-25 15:30:33.280260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.267 ms 00:25:50.779 [2024-10-25 15:30:33.280271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.302521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.302563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:50.779 [2024-10-25 15:30:33.302576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.245 ms 00:25:50.779 [2024-10-25 15:30:33.302587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.304402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.304440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:50.779 [2024-10-25 15:30:33.304453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.774 ms 00:25:50.779 [2024-10-25 15:30:33.304464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.341033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.779 [2024-10-25 15:30:33.341072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:50.779 [2024-10-25 15:30:33.341085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.610 ms 00:25:50.779 [2024-10-25 15:30:33.341094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.779 [2024-10-25 15:30:33.377528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.780 [2024-10-25 15:30:33.377568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:50.780 [2024-10-25 15:30:33.377593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.438 ms 00:25:50.780 [2024-10-25 15:30:33.377618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.780 [2024-10-25 15:30:33.413111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.780 [2024-10-25 15:30:33.413166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:50.780 [2024-10-25 
15:30:33.413201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.511 ms 00:25:50.780 [2024-10-25 15:30:33.413212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.780 [2024-10-25 15:30:33.447155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.780 [2024-10-25 15:30:33.447199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:50.780 [2024-10-25 15:30:33.447212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.923 ms 00:25:50.780 [2024-10-25 15:30:33.447222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.780 [2024-10-25 15:30:33.447258] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:50.780 [2024-10-25 15:30:33.447274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:50.780 [2024-10-25 15:30:33.447288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:50.780 [2024-10-25 15:30:33.447299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447489] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 
15:30:33.447759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.447998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:25:50.780 [2024-10-25 15:30:33.448030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:50.780 [2024-10-25 15:30:33.448104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:50.781 [2024-10-25 15:30:33.448377] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:50.781 [2024-10-25 15:30:33.448387] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc8ab459-38ff-48ba-a6d3-54204f33fed3 00:25:50.781 [2024-10-25 15:30:33.448398] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:50.781 [2024-10-25 15:30:33.448408] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158400 00:25:50.781 [2024-10-25 15:30:33.448418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156416 00:25:50.781 [2024-10-25 15:30:33.448428] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:25:50.781 [2024-10-25 15:30:33.448441] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:50.781 [2024-10-25 15:30:33.448451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:50.781 [2024-10-25 15:30:33.448461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:50.781 [2024-10-25 15:30:33.448481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:50.781 [2024-10-25 15:30:33.448489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:50.781 [2024-10-25 15:30:33.448499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.781 [2024-10-25 15:30:33.448509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:50.781 [2024-10-25 15:30:33.448519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:25:50.781 [2024-10-25 15:30:33.448529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.781 [2024-10-25 15:30:33.468843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.781 [2024-10-25 15:30:33.468880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:50.781 [2024-10-25 15:30:33.468915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.311 ms 00:25:50.781 [2024-10-25 15:30:33.468925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.781 [2024-10-25 15:30:33.469480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.781 [2024-10-25 15:30:33.469498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:50.781 [2024-10-25 15:30:33.469509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:25:50.781 [2024-10-25 15:30:33.469519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 
15:30:33.521499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.521536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:51.041 [2024-10-25 15:30:33.521549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.521560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.521614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.521625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:51.041 [2024-10-25 15:30:33.521635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.521645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.521714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.521732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:51.041 [2024-10-25 15:30:33.521742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.521752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.521769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.521779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:51.041 [2024-10-25 15:30:33.521789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.521799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.645213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.645268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:51.041 [2024-10-25 15:30:33.645284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.645295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.744859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.744906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:51.041 [2024-10-25 15:30:33.744937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.744947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.745034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.745046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:51.041 [2024-10-25 15:30:33.745057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.745070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.041 [2024-10-25 15:30:33.745107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:51.041 [2024-10-25 15:30:33.745118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:51.041 [2024-10-25 15:30:33.745128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:51.041 [2024-10-25 15:30:33.745138] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.041 [2024-10-25 15:30:33.745249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:51.041 [2024-10-25 15:30:33.745263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:51.041 [2024-10-25 15:30:33.745273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:51.041 [2024-10-25 15:30:33.745288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.041 [2024-10-25 15:30:33.745321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:51.041 [2024-10-25 15:30:33.745333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:51.041 [2024-10-25 15:30:33.745343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:51.041 [2024-10-25 15:30:33.745353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.041 [2024-10-25 15:30:33.745406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:51.041 [2024-10-25 15:30:33.745419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:51.041 [2024-10-25 15:30:33.745429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:51.041 [2024-10-25 15:30:33.745439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.041 [2024-10-25 15:30:33.745485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:51.041 [2024-10-25 15:30:33.745496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:51.041 [2024-10-25 15:30:33.745507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:51.041 [2024-10-25 15:30:33.745516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:51.041 [2024-10-25 15:30:33.745627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.718 ms, result 0
00:25:52.421
00:25:52.421
00:25:52.421 15:30:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:53.798 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
15:30:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:54.057 [2024-10-25 15:30:36.526678] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization...
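For reference, the two xtrace lines above (dirty_shutdown.sh@94 and @95) are the read-back half of the dirty shutdown check: the range written and checksummed before the unclean shutdown is verified with md5sum, and the next 262144-block range is copied back out of the ftl0 bdev so it can be checksummed in turn. A minimal stand-alone sketch of that sequence, assuming the same repo paths and the ftl.json bdev config generated earlier in the test:

  # Verify the data written before the dirty shutdown against the stored md5.
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5

  # Read the second 262144-block range back out of the FTL bdev;
  # --skip offsets into the input bdev, --count bounds the copy.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
    --count=262144 --skip=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json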
00:25:54.057 [2024-10-25 15:30:36.526814] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79987 ] 00:25:54.057 [2024-10-25 15:30:36.703634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.316 [2024-10-25 15:30:36.809317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.575 [2024-10-25 15:30:37.153954] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:54.575 [2024-10-25 15:30:37.154016] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:54.835 [2024-10-25 15:30:37.314815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.314863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:54.835 [2024-10-25 15:30:37.314882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:54.835 [2024-10-25 15:30:37.314892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.314956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.314968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:54.835 [2024-10-25 15:30:37.314982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:54.835 [2024-10-25 15:30:37.314992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.315023] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:54.835 [2024-10-25 15:30:37.316092] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:54.835 [2024-10-25 15:30:37.316126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.316137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:54.835 [2024-10-25 15:30:37.316148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.118 ms 00:25:54.835 [2024-10-25 15:30:37.316158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.317587] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:54.835 [2024-10-25 15:30:37.336715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.336756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:54.835 [2024-10-25 15:30:37.336771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.159 ms 00:25:54.835 [2024-10-25 15:30:37.336781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.336858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.336873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:54.835 [2024-10-25 15:30:37.336893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:54.835 [2024-10-25 15:30:37.336903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.343629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:54.835 [2024-10-25 15:30:37.343662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:54.835 [2024-10-25 15:30:37.343675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.664 ms 00:25:54.835 [2024-10-25 15:30:37.343685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.343782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.343796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:54.835 [2024-10-25 15:30:37.343807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:54.835 [2024-10-25 15:30:37.343817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.343856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.343868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:54.835 [2024-10-25 15:30:37.343880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:54.835 [2024-10-25 15:30:37.343890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.343913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:54.835 [2024-10-25 15:30:37.348819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.348854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:54.835 [2024-10-25 15:30:37.348866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.920 ms 00:25:54.835 [2024-10-25 15:30:37.348880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.348925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.835 [2024-10-25 15:30:37.348936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:54.835 [2024-10-25 15:30:37.348947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:54.835 [2024-10-25 15:30:37.348957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.835 [2024-10-25 15:30:37.349010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:54.835 [2024-10-25 15:30:37.349033] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:54.835 [2024-10-25 15:30:37.349067] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:54.835 [2024-10-25 15:30:37.349087] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:54.835 [2024-10-25 15:30:37.349176] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:54.835 [2024-10-25 15:30:37.349189] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:54.835 [2024-10-25 15:30:37.349217] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:54.835 [2024-10-25 15:30:37.349230] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:54.835 [2024-10-25 15:30:37.349242] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:54.835 [2024-10-25 15:30:37.349253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:54.835 [2024-10-25 15:30:37.349263] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:54.836 [2024-10-25 15:30:37.349272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:54.836 [2024-10-25 15:30:37.349282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:54.836 [2024-10-25 15:30:37.349296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.836 [2024-10-25 15:30:37.349306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:54.836 [2024-10-25 15:30:37.349317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:25:54.836 [2024-10-25 15:30:37.349327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.836 [2024-10-25 15:30:37.349398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.836 [2024-10-25 15:30:37.349408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:54.836 [2024-10-25 15:30:37.349419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:54.836 [2024-10-25 15:30:37.349428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.836 [2024-10-25 15:30:37.349521] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:54.836 [2024-10-25 15:30:37.349540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:54.836 [2024-10-25 15:30:37.349551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:54.836 [2024-10-25 15:30:37.349582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:54.836 [2024-10-25 15:30:37.349611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.836 [2024-10-25 15:30:37.349630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:54.836 [2024-10-25 15:30:37.349639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:54.836 [2024-10-25 15:30:37.349649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:54.836 [2024-10-25 15:30:37.349658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:54.836 [2024-10-25 15:30:37.349668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:54.836 [2024-10-25 15:30:37.349686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:54.836 [2024-10-25 15:30:37.349705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349714] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:54.836 [2024-10-25 15:30:37.349732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:54.836 [2024-10-25 15:30:37.349760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:54.836 [2024-10-25 15:30:37.349788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:54.836 [2024-10-25 15:30:37.349816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:54.836 [2024-10-25 15:30:37.349843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.836 [2024-10-25 15:30:37.349862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:54.836 [2024-10-25 15:30:37.349871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:54.836 [2024-10-25 15:30:37.349880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:54.836 [2024-10-25 15:30:37.349889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:54.836 [2024-10-25 15:30:37.349898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:54.836 [2024-10-25 15:30:37.349907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:54.836 [2024-10-25 15:30:37.349925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:54.836 [2024-10-25 15:30:37.349934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:54.836 [2024-10-25 15:30:37.349954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:54.836 [2024-10-25 15:30:37.349964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:54.836 [2024-10-25 15:30:37.349974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:54.836 [2024-10-25 15:30:37.349984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:54.836 [2024-10-25 15:30:37.349993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:54.836 [2024-10-25 15:30:37.350003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:54.836 
[2024-10-25 15:30:37.350012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:54.836 [2024-10-25 15:30:37.350021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:54.836 [2024-10-25 15:30:37.350030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:54.836 [2024-10-25 15:30:37.350040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:54.836 [2024-10-25 15:30:37.350052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.836 [2024-10-25 15:30:37.350064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:54.836 [2024-10-25 15:30:37.350074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:54.836 [2024-10-25 15:30:37.350084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:54.836 [2024-10-25 15:30:37.350095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:54.836 [2024-10-25 15:30:37.350108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:54.836 [2024-10-25 15:30:37.350119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:54.836 [2024-10-25 15:30:37.350129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:54.836 [2024-10-25 15:30:37.350140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:54.836 [2024-10-25 15:30:37.350150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:54.836 [2024-10-25 15:30:37.350161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:54.837 [2024-10-25 15:30:37.350229] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:54.837 [2024-10-25 15:30:37.350241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:54.837 [2024-10-25 15:30:37.350266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:54.837 [2024-10-25 15:30:37.350277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:54.837 [2024-10-25 15:30:37.350288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:54.837 [2024-10-25 15:30:37.350299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.350310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:54.837 [2024-10-25 15:30:37.350320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:25:54.837 [2024-10-25 15:30:37.350330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.391414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.391454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:54.837 [2024-10-25 15:30:37.391468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.104 ms 00:25:54.837 [2024-10-25 15:30:37.391480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.391560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.391576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:54.837 [2024-10-25 15:30:37.391586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:54.837 [2024-10-25 15:30:37.391597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.449858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.449899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.837 [2024-10-25 15:30:37.449928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.295 ms 00:25:54.837 [2024-10-25 15:30:37.449939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.449976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.449987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.837 [2024-10-25 15:30:37.449998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:54.837 [2024-10-25 15:30:37.450012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.450511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.450536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.837 [2024-10-25 15:30:37.450547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:25:54.837 [2024-10-25 15:30:37.450558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.450675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.450689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.837 [2024-10-25 15:30:37.450700] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:25:54.837 [2024-10-25 15:30:37.450710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.470587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.470626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.837 [2024-10-25 15:30:37.470640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.883 ms 00:25:54.837 [2024-10-25 15:30:37.470670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.489345] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:54.837 [2024-10-25 15:30:37.489384] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:54.837 [2024-10-25 15:30:37.489414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.489425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:54.837 [2024-10-25 15:30:37.489437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.666 ms 00:25:54.837 [2024-10-25 15:30:37.489446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.517954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.517996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:54.837 [2024-10-25 15:30:37.518026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.511 ms 00:25:54.837 [2024-10-25 15:30:37.518036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.535740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.535781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:54.837 [2024-10-25 15:30:37.535794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.688 ms 00:25:54.837 [2024-10-25 15:30:37.535804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.553351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.553388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:54.837 [2024-10-25 15:30:37.553416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.513 ms 00:25:54.837 [2024-10-25 15:30:37.553426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.837 [2024-10-25 15:30:37.554130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.837 [2024-10-25 15:30:37.554163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:54.837 [2024-10-25 15:30:37.554190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:25:54.837 [2024-10-25 15:30:37.554203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.096 [2024-10-25 15:30:37.648929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.648991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:55.097 [2024-10-25 15:30:37.649008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.853 ms 00:25:55.097 [2024-10-25 15:30:37.649026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.659989] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:55.097 [2024-10-25 15:30:37.662471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.662499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:55.097 [2024-10-25 15:30:37.662513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.418 ms 00:25:55.097 [2024-10-25 15:30:37.662524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.662605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.662620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:55.097 [2024-10-25 15:30:37.662631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:55.097 [2024-10-25 15:30:37.662641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.663547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.663576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:55.097 [2024-10-25 15:30:37.663588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.845 ms 00:25:55.097 [2024-10-25 15:30:37.663599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.663626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.663638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:55.097 [2024-10-25 15:30:37.663649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:55.097 [2024-10-25 15:30:37.663659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.663694] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:55.097 [2024-10-25 15:30:37.663710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.663720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:55.097 [2024-10-25 15:30:37.663730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:55.097 [2024-10-25 15:30:37.663740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.699733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.699774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:55.097 [2024-10-25 15:30:37.699788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.032 ms 00:25:55.097 [2024-10-25 15:30:37.699800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.097 [2024-10-25 15:30:37.699879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:55.097 [2024-10-25 15:30:37.699891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:55.097 [2024-10-25 15:30:37.699918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:55.097 [2024-10-25 15:30:37.699928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
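Every management step in the startup sequence above is logged by trace_step() as a four-entry group: Action, name, duration, status. When hunting for slow steps it helps to collapse that stream into a per-step timing table; the sketch below is a hypothetical helper (not part of the SPDK tree) and assumes the log has been split to one trace_step entry per line with the trailing wall-clock column stripped:

  #!/usr/bin/env bash
  # ftl_step_times.sh <logfile> -- list FTL management steps, slowest first.
  # Relies on each 428 (name) line being followed by its 430 (duration) line,
  # as in the trace above.
  grep -E 'mngt/ftl_mngt\.c: (428|430):trace_step' "$1" |
    sed -E 's/.*(name|duration): //; s/ ms$//' |
    paste - - |                                  # -> "name<TAB>duration"
    awk -F'\t' '{printf "%10.3f ms  %s\n", $2, $1}' |
    sort -rn

Run against this log, it would put Restore P2L checkpoints (94.853 ms) and Initialize NV cache (58.295 ms) at the top of the startup phase.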
00:25:55.097 [2024-10-25 15:30:37.700982] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.365 ms, result 0 00:25:56.473
[2024-10-25T15:30:40.136Z] Copying: 26/1024 [MB] (26 MBps)
[... 37 intermediate copy-progress updates elided; throughput held at 25-27 MBps throughout ...]
[2024-10-25T15:31:16.749Z] Copying: 1024/1024 [MB] (average 26 MBps)
[2024-10-25 15:31:16.635887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021
[2024-10-25 15:31:16.636029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:34.021
[2024-10-25 15:31:16.636078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:34.021
[2024-10-25 15:31:16.636113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021
[2024-10-25 15:31:16.636210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:34.021
[2024-10-25 15:31:16.648361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021
[2024-10-25 15:31:16.648427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:34.021
[2024-10-25 15:31:16.648459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.113 ms 00:26:34.021
[2024-10-25 15:31:16.648498] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.649021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.649067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:34.021 [2024-10-25 15:31:16.649095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:26:34.021 [2024-10-25 15:31:16.649121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.654441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.654473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:34.021 [2024-10-25 15:31:16.654501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.292 ms 00:26:34.021 [2024-10-25 15:31:16.654518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.662392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.662427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:34.021 [2024-10-25 15:31:16.662440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.849 ms 00:26:34.021 [2024-10-25 15:31:16.662452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.700490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.700530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:34.021 [2024-10-25 15:31:16.700545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.027 ms 00:26:34.021 [2024-10-25 15:31:16.700555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.721602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.721643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:34.021 [2024-10-25 15:31:16.721673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.035 ms 00:26:34.021 [2024-10-25 15:31:16.721684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.021 [2024-10-25 15:31:16.723858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.021 [2024-10-25 15:31:16.723892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:34.021 [2024-10-25 15:31:16.723912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:26:34.021 [2024-10-25 15:31:16.723922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.282 [2024-10-25 15:31:16.760141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.282 [2024-10-25 15:31:16.760186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:34.282 [2024-10-25 15:31:16.760200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.258 ms 00:26:34.282 [2024-10-25 15:31:16.760212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.282 [2024-10-25 15:31:16.795935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.282 [2024-10-25 15:31:16.795984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:34.282 [2024-10-25 15:31:16.795999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.738 ms 00:26:34.282 
[2024-10-25 15:31:16.796009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.282
[2024-10-25 15:31:16.831233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.282
[2024-10-25 15:31:16.831273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:34.282
[2024-10-25 15:31:16.831287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.239 ms 00:26:34.282
[2024-10-25 15:31:16.831297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.282
[2024-10-25 15:31:16.867358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.282
[2024-10-25 15:31:16.867398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:34.282
[2024-10-25 15:31:16.867412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.033 ms 00:26:34.282
[2024-10-25 15:31:16.867422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.282
[2024-10-25 15:31:16.867464] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:34.282
[2024-10-25 15:31:16.867481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:34.282
[2024-10-25 15:31:16.867500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:34.282
[... Bands 3-100 elided: every one reported as 0 / 261120 wr_cnt: 0 state: free ...]
[2024-10-25 15:31:16.868574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:34.283
[2024-10-25 15:31:16.868583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc8ab459-38ff-48ba-a6d3-54204f33fed3 00:26:34.283
[2024-10-25 15:31:16.868599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:34.283
[2024-10-25 15:31:16.868608] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:34.283
[2024-10-25 15:31:16.868620] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:34.283
[2024-10-25 15:31:16.868630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:34.283
[2024-10-25 15:31:16.868640] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:34.283
[2024-10-25 15:31:16.868650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:34.283
[2024-10-25 15:31:16.868671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:34.283
[2024-10-25 15:31:16.868680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:34.283
[2024-10-25 15:31:16.868689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:34.283
[2024-10-25 15:31:16.868699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.283
[2024-10-25 15:31:16.868714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:34.283
[2024-10-25 15:31:16.868725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:26:34.283
[2024-10-25 15:31:16.868735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283
[2024-10-25 15:31:16.888544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.283
[2024-10-25 15:31:16.888580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:34.283
[2024-10-25 15:31:16.888593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.801 ms 00:26:34.283
[2024-10-25 15:31:16.888604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283
[2024-10-25 15:31:16.889189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:34.283
[2024-10-25 15:31:16.889206] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:34.283 [2024-10-25 15:31:16.889217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:26:34.283 [2024-10-25 15:31:16.889233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283 [2024-10-25 15:31:16.941496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.283 [2024-10-25 15:31:16.941538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:34.283 [2024-10-25 15:31:16.941553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.283 [2024-10-25 15:31:16.941564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283 [2024-10-25 15:31:16.941631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.283 [2024-10-25 15:31:16.941643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:34.283 [2024-10-25 15:31:16.941653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.283 [2024-10-25 15:31:16.941669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283 [2024-10-25 15:31:16.941747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.283 [2024-10-25 15:31:16.941760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:34.283 [2024-10-25 15:31:16.941770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.283 [2024-10-25 15:31:16.941781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.283 [2024-10-25 15:31:16.941797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.283 [2024-10-25 15:31:16.941809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:34.283 [2024-10-25 15:31:16.941819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.283 [2024-10-25 15:31:16.941828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.066191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.066253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:34.543 [2024-10-25 15:31:17.066285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.066296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:34.543 [2024-10-25 15:31:17.167347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.543 [2024-10-25 15:31:17.167471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:34.543 [2024-10-25 15:31:17.167541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.543 [2024-10-25 15:31:17.167552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.543 [2024-10-25 15:31:17.167686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:34.543 [2024-10-25 15:31:17.167754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.543 [2024-10-25 15:31:17.167826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.543 [2024-10-25 15:31:17.167878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.543 [2024-10-25 15:31:17.167890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.543 [2024-10-25 15:31:17.167900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.543 [2024-10-25 15:31:17.167910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.544 [2024-10-25 15:31:17.168026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.703 ms, result 0 00:26:35.921 00:26:35.921 00:26:35.921 15:31:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:37.299 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:37.299 15:31:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:37.299 15:31:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:37.299 15:31:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.299 15:31:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78245 00:26:37.557 Process 
with pid 78245 is not found 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78245 ']' 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 78245 00:26:37.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78245) - No such process 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78245 is not found' 00:26:37.557 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:37.815 Remove shared memory files 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:37.815 15:31:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:38.073 00:26:38.073 real 3m29.586s 00:26:38.073 user 3m56.457s 00:26:38.073 sys 0m37.659s 00:26:38.073 15:31:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:38.073 15:31:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:38.073 ************************************ 00:26:38.073 END TEST ftl_dirty_shutdown 00:26:38.073 ************************************ 00:26:38.073 15:31:20 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:38.073 15:31:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:38.073 15:31:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:38.073 15:31:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:38.073 ************************************ 00:26:38.073 START TEST ftl_upgrade_shutdown 00:26:38.073 ************************************ 00:26:38.073 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:38.073 * Looking for test storage... 
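The md5sum -c step at the end of the dirty-shutdown test above is the heart of the check: a checksum of the test file is recorded while the FTL device is live, the target is killed without a clean FTL shutdown, and after recovery the same data must still verify. A minimal sketch of that pattern follows; the paths, sizes, and the $svcpid variable are placeholders, not the exact test code:

  # record a checksum of data written through the FTL bdev
  dd if=/dev/urandom of=/mnt/ftl/testfile bs=1M count=256
  md5sum /mnt/ftl/testfile > /tmp/testfile.md5
  # simulate a dirty shutdown: kill the target, no FTL unload
  kill -9 "$svcpid"
  # ...restart the target and recreate the FTL bdev from its superblock...
  md5sum -c /tmp/testfile.md5    # must print "testfile: OK"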
00:26:38.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.073 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:38.073 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:26:38.073 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:38.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.332 --rc genhtml_branch_coverage=1 00:26:38.332 --rc genhtml_function_coverage=1 00:26:38.332 --rc genhtml_legend=1 00:26:38.332 --rc geninfo_all_blocks=1 00:26:38.332 --rc geninfo_unexecuted_blocks=1 00:26:38.332 00:26:38.332 ' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:38.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.332 --rc genhtml_branch_coverage=1 00:26:38.332 --rc genhtml_function_coverage=1 00:26:38.332 --rc genhtml_legend=1 00:26:38.332 --rc geninfo_all_blocks=1 00:26:38.332 --rc geninfo_unexecuted_blocks=1 00:26:38.332 00:26:38.332 ' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:38.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.332 --rc genhtml_branch_coverage=1 00:26:38.332 --rc genhtml_function_coverage=1 00:26:38.332 --rc genhtml_legend=1 00:26:38.332 --rc geninfo_all_blocks=1 00:26:38.332 --rc geninfo_unexecuted_blocks=1 00:26:38.332 00:26:38.332 ' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:38.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.332 --rc genhtml_branch_coverage=1 00:26:38.332 --rc genhtml_function_coverage=1 00:26:38.332 --rc genhtml_legend=1 00:26:38.332 --rc geninfo_all_blocks=1 00:26:38.332 --rc geninfo_unexecuted_blocks=1 00:26:38.332 00:26:38.332 ' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:38.332 15:31:20 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80518 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80518 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80518 ']' 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:38.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.332 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.333 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:38.333 15:31:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:38.333 [2024-10-25 15:31:20.975186] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
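tcp_target_setup above launches spdk_tgt pinned to core 0 ('--cpumask=[0]') and then blocks in waitforlisten until pid 80518 answers on /var/tmp/spdk.sock. Outside the harness the same handshake can be approximated with a polling loop like this sketch; the timeout budget and retry interval are assumptions, not values from common.sh:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
  tgt_pid=$!
  # poll the default RPC socket until the target responds (~10 s budget)
  for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods \
      >/dev/null 2>&1 && break
    sleep 0.1
  done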
00:26:38.333 [2024-10-25 15:31:20.975768] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80518 ] 00:26:38.590 [2024-10-25 15:31:21.158504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.590 [2024-10-25 15:31:21.263618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:39.524 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:26:39.782 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:40.041 { 00:26:40.041 "name": "basen1", 00:26:40.041 "aliases": [ 00:26:40.041 "593757d9-8e83-4c84-b579-d207d333ba0b" 00:26:40.041 ], 00:26:40.041 "product_name": "NVMe disk", 00:26:40.041 "block_size": 4096, 00:26:40.041 "num_blocks": 1310720, 00:26:40.041 "uuid": "593757d9-8e83-4c84-b579-d207d333ba0b", 00:26:40.041 "numa_id": -1, 00:26:40.041 "assigned_rate_limits": { 00:26:40.041 "rw_ios_per_sec": 0, 00:26:40.041 "rw_mbytes_per_sec": 0, 00:26:40.041 "r_mbytes_per_sec": 0, 00:26:40.041 "w_mbytes_per_sec": 0 00:26:40.041 }, 00:26:40.041 "claimed": true, 00:26:40.041 "claim_type": "read_many_write_one", 00:26:40.041 "zoned": false, 00:26:40.041 "supported_io_types": { 00:26:40.041 "read": true, 00:26:40.041 "write": true, 00:26:40.041 "unmap": true, 00:26:40.041 "flush": true, 00:26:40.041 "reset": true, 00:26:40.041 "nvme_admin": true, 00:26:40.041 "nvme_io": true, 00:26:40.041 "nvme_io_md": false, 00:26:40.041 "write_zeroes": true, 00:26:40.041 "zcopy": false, 00:26:40.041 "get_zone_info": false, 00:26:40.041 "zone_management": false, 00:26:40.041 "zone_append": false, 00:26:40.041 "compare": true, 00:26:40.041 "compare_and_write": false, 00:26:40.041 "abort": true, 00:26:40.041 "seek_hole": false, 00:26:40.041 "seek_data": false, 00:26:40.041 "copy": true, 00:26:40.041 "nvme_iov_md": false 00:26:40.041 }, 00:26:40.041 "driver_specific": { 00:26:40.041 "nvme": [ 00:26:40.041 { 00:26:40.041 "pci_address": "0000:00:11.0", 00:26:40.041 "trid": { 00:26:40.041 "trtype": "PCIe", 00:26:40.041 "traddr": "0000:00:11.0" 00:26:40.041 }, 00:26:40.041 "ctrlr_data": { 00:26:40.041 "cntlid": 0, 00:26:40.041 "vendor_id": "0x1b36", 00:26:40.041 "model_number": "QEMU NVMe Ctrl", 00:26:40.041 "serial_number": "12341", 00:26:40.041 "firmware_revision": "8.0.0", 00:26:40.041 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:40.041 "oacs": { 00:26:40.041 "security": 0, 00:26:40.041 "format": 1, 00:26:40.041 "firmware": 0, 00:26:40.041 "ns_manage": 1 00:26:40.041 }, 00:26:40.041 "multi_ctrlr": false, 00:26:40.041 "ana_reporting": false 00:26:40.041 }, 00:26:40.041 "vs": { 00:26:40.041 "nvme_version": "1.4" 00:26:40.041 }, 00:26:40.041 "ns_data": { 00:26:40.041 "id": 1, 00:26:40.041 "can_share": false 00:26:40.041 } 00:26:40.041 } 00:26:40.041 ], 00:26:40.041 "mp_policy": "active_passive" 00:26:40.041 } 00:26:40.041 } 00:26:40.041 ]' 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:40.041 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:40.300 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=63074878-b802-43f7-82d2-130c1ec00bc7 00:26:40.300 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:40.300 15:31:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63074878-b802-43f7-82d2-130c1ec00bc7 00:26:40.585 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:40.585 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1bfeaa74-09b1-4415-9545-0a9d8f5194b5 00:26:40.585 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1bfeaa74-09b1-4415-9545-0a9d8f5194b5 00:26:40.843 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=9efa826c-51bf-4ef5-b6c7-6af8760638b1 00:26:40.843 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 9efa826c-51bf-4ef5-b6c7-6af8760638b1 ]] 00:26:40.843 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 9efa826c-51bf-4ef5-b6c7-6af8760638b1 5120 00:26:40.843 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:40.843 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=9efa826c-51bf-4ef5-b6c7-6af8760638b1 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 9efa826c-51bf-4ef5-b6c7-6af8760638b1 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=9efa826c-51bf-4ef5-b6c7-6af8760638b1 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:26:40.844 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9efa826c-51bf-4ef5-b6c7-6af8760638b1 00:26:41.102 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:26:41.102 { 00:26:41.102 "name": "9efa826c-51bf-4ef5-b6c7-6af8760638b1", 00:26:41.102 "aliases": [ 00:26:41.102 "lvs/basen1p0" 00:26:41.102 ], 00:26:41.102 "product_name": "Logical Volume", 00:26:41.102 "block_size": 4096, 00:26:41.102 "num_blocks": 5242880, 00:26:41.102 "uuid": "9efa826c-51bf-4ef5-b6c7-6af8760638b1", 00:26:41.102 "assigned_rate_limits": { 00:26:41.102 "rw_ios_per_sec": 0, 00:26:41.102 "rw_mbytes_per_sec": 0, 00:26:41.102 "r_mbytes_per_sec": 0, 00:26:41.102 "w_mbytes_per_sec": 0 00:26:41.102 }, 00:26:41.102 "claimed": false, 00:26:41.102 "zoned": false, 00:26:41.102 "supported_io_types": { 00:26:41.102 "read": true, 00:26:41.102 "write": true, 00:26:41.102 "unmap": true, 00:26:41.102 "flush": false, 00:26:41.102 "reset": true, 00:26:41.102 "nvme_admin": false, 00:26:41.102 "nvme_io": false, 00:26:41.103 "nvme_io_md": false, 00:26:41.103 "write_zeroes": 
true, 00:26:41.103 "zcopy": false, 00:26:41.103 "get_zone_info": false, 00:26:41.103 "zone_management": false, 00:26:41.103 "zone_append": false, 00:26:41.103 "compare": false, 00:26:41.103 "compare_and_write": false, 00:26:41.103 "abort": false, 00:26:41.103 "seek_hole": true, 00:26:41.103 "seek_data": true, 00:26:41.103 "copy": false, 00:26:41.103 "nvme_iov_md": false 00:26:41.103 }, 00:26:41.103 "driver_specific": { 00:26:41.103 "lvol": { 00:26:41.103 "lvol_store_uuid": "1bfeaa74-09b1-4415-9545-0a9d8f5194b5", 00:26:41.103 "base_bdev": "basen1", 00:26:41.103 "thin_provision": true, 00:26:41.103 "num_allocated_clusters": 0, 00:26:41.103 "snapshot": false, 00:26:41.103 "clone": false, 00:26:41.103 "esnap_clone": false 00:26:41.103 } 00:26:41.103 } 00:26:41.103 } 00:26:41.103 ]' 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:41.103 15:31:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:41.362 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:41.362 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:41.362 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:41.621 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:41.621 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:41.621 15:31:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9efa826c-51bf-4ef5-b6c7-6af8760638b1 -c cachen1p0 --l2p_dram_limit 2 00:26:41.880 [2024-10-25 15:31:24.458121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.880 [2024-10-25 15:31:24.458187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:41.880 [2024-10-25 15:31:24.458223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:41.880 [2024-10-25 15:31:24.458234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.880 [2024-10-25 15:31:24.458298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.880 [2024-10-25 15:31:24.458310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:41.880 [2024-10-25 15:31:24.458324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:41.881 [2024-10-25 15:31:24.458334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.458357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:41.881 [2024-10-25 
15:31:24.459415] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:41.881 [2024-10-25 15:31:24.459453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.459464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:41.881 [2024-10-25 15:31:24.459480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.099 ms 00:26:41.881 [2024-10-25 15:31:24.459491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.459573] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID ca70ed1e-89de-4b94-accc-9539f64b4874 00:26:41.881 [2024-10-25 15:31:24.461022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.461061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:41.881 [2024-10-25 15:31:24.461073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:41.881 [2024-10-25 15:31:24.461086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.468550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.468588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:41.881 [2024-10-25 15:31:24.468616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.433 ms 00:26:41.881 [2024-10-25 15:31:24.468632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.468680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.468696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:41.881 [2024-10-25 15:31:24.468707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:26:41.881 [2024-10-25 15:31:24.468722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.468778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.468793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:41.881 [2024-10-25 15:31:24.468804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:26:41.881 [2024-10-25 15:31:24.468818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.468846] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:41.881 [2024-10-25 15:31:24.473766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.473802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:41.881 [2024-10-25 15:31:24.473834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.935 ms 00:26:41.881 [2024-10-25 15:31:24.473849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.473879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.473890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:41.881 [2024-10-25 15:31:24.473903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:41.881 [2024-10-25 15:31:24.473913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.473980] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:41.881 [2024-10-25 15:31:24.474115] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:41.881 [2024-10-25 15:31:24.474135] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:41.881 [2024-10-25 15:31:24.474149] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:41.881 [2024-10-25 15:31:24.474165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474190] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474205] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:41.881 [2024-10-25 15:31:24.474215] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:41.881 [2024-10-25 15:31:24.474227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:41.881 [2024-10-25 15:31:24.474237] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:41.881 [2024-10-25 15:31:24.474254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.474264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:41.881 [2024-10-25 15:31:24.474277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.297 ms 00:26:41.881 [2024-10-25 15:31:24.474288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.474366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.881 [2024-10-25 15:31:24.474377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:41.881 [2024-10-25 15:31:24.474391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:26:41.881 [2024-10-25 15:31:24.474410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.881 [2024-10-25 15:31:24.474497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:41.881 [2024-10-25 15:31:24.474512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:41.881 [2024-10-25 15:31:24.474525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:41.881 [2024-10-25 15:31:24.474557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:41.881 [2024-10-25 15:31:24.474578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:41.881 [2024-10-25 15:31:24.474591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:41.881 [2024-10-25 15:31:24.474600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:41.881 [2024-10-25 15:31:24.474620] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:41.881 [2024-10-25 15:31:24.474631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:41.881 [2024-10-25 15:31:24.474654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:41.881 [2024-10-25 15:31:24.474663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:41.881 [2024-10-25 15:31:24.474686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:41.881 [2024-10-25 15:31:24.474697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:41.881 [2024-10-25 15:31:24.474720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:41.881 [2024-10-25 15:31:24.474729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:41.881 [2024-10-25 15:31:24.474749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:41.881 [2024-10-25 15:31:24.474761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:41.881 [2024-10-25 15:31:24.474781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:41.881 [2024-10-25 15:31:24.474790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:41.881 [2024-10-25 15:31:24.474811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:41.881 [2024-10-25 15:31:24.474822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:41.881 [2024-10-25 15:31:24.474845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:41.881 [2024-10-25 15:31:24.474854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:41.881 [2024-10-25 15:31:24.474874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:41.881 [2024-10-25 15:31:24.474906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:41.881 [2024-10-25 15:31:24.474935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:41.881 [2024-10-25 15:31:24.474946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.474954] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:41.881 [2024-10-25 15:31:24.474966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:41.881 [2024-10-25 15:31:24.474978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:41.881 [2024-10-25 15:31:24.474990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:41.881 [2024-10-25 15:31:24.475001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:41.881 [2024-10-25 15:31:24.475016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:41.881 [2024-10-25 15:31:24.475033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:41.882 [2024-10-25 15:31:24.475045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:41.882 [2024-10-25 15:31:24.475054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:41.882 [2024-10-25 15:31:24.475065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:41.882 [2024-10-25 15:31:24.475079] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:41.882 [2024-10-25 15:31:24.475093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:41.882 [2024-10-25 15:31:24.475118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:41.882 [2024-10-25 15:31:24.475151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:41.882 [2024-10-25 15:31:24.475164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:41.882 [2024-10-25 15:31:24.475174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:41.882 [2024-10-25 15:31:24.475202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:41.882 [2024-10-25 15:31:24.475284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:41.882 [2024-10-25 15:31:24.475298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.882 [2024-10-25 15:31:24.475326] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:41.882 [2024-10-25 15:31:24.475335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:41.882 [2024-10-25 15:31:24.475348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:41.882 [2024-10-25 15:31:24.475358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.882 [2024-10-25 15:31:24.475371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:41.882 [2024-10-25 15:31:24.475381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:26:41.882 [2024-10-25 15:31:24.475394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.882 [2024-10-25 15:31:24.475434] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
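To make the startup trace above easier to follow: the FTL instance sits on a 20480 MiB (20 GiB) thin-provisioned lvol carved from the QEMU NVMe at 0000:00:11.0 (1310720 blocks x 4096 B = 5120 MiB raw, hence the thin provisioning), with the first 5120 MiB split of the NVMe at 0000:00:10.0 as its write buffer cache. A condensed sketch of the RPC sequence, with paths shortened and the run-time UUIDs shown as placeholders (any pre-existing lvstores are deleted first, as clear_lvols does above):

    # base device: attach the PCIe controller, build an lvstore, carve a thin 20 GiB lvol
    scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
    scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvstore uuid>   # -t: thin provisioned
    # NV cache: attach the second controller and split off the first 5120 MiB
    scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create cachen1 -s 5120 1
    # FTL bdev on top; L2P DRAM is capped at 2 MiB, matching the
    # "l2p maximum resident size is: 1 (of 2) MiB" notice further down
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol uuid> -c cachen1p0 --l2p_dram_limit 2

The scrub announced above then prepares the 5 NV cache chunks reported in the layout dump before the new instance finishes starting up.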
00:26:41.882 [2024-10-25 15:31:24.475453] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:48.442 [2024-10-25 15:31:29.949822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:29.949891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:48.442 [2024-10-25 15:31:29.949908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5483.274 ms 00:26:48.442 [2024-10-25 15:31:29.949937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:29.986695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:29.986750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:48.442 [2024-10-25 15:31:29.986766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.392 ms 00:26:48.442 [2024-10-25 15:31:29.986795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:29.986873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:29.986889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:48.442 [2024-10-25 15:31:29.986900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:48.442 [2024-10-25 15:31:29.986915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.033498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.033544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:48.442 [2024-10-25 15:31:30.033559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.616 ms 00:26:48.442 [2024-10-25 15:31:30.033572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.033609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.033625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:48.442 [2024-10-25 15:31:30.033636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:48.442 [2024-10-25 15:31:30.033652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.034139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.034163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:48.442 [2024-10-25 15:31:30.034175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.420 ms 00:26:48.442 [2024-10-25 15:31:30.034201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.034251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.034264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:48.442 [2024-10-25 15:31:30.034275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:48.442 [2024-10-25 15:31:30.034291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.055019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.055089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:48.442 [2024-10-25 15:31:30.055103] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.737 ms 00:26:48.442 [2024-10-25 15:31:30.055119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.067744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:48.442 [2024-10-25 15:31:30.068785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.068812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:48.442 [2024-10-25 15:31:30.068827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.592 ms 00:26:48.442 [2024-10-25 15:31:30.068837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.118484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.442 [2024-10-25 15:31:30.118529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:48.442 [2024-10-25 15:31:30.118563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.693 ms 00:26:48.442 [2024-10-25 15:31:30.118574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.442 [2024-10-25 15:31:30.118664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.118678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:48.443 [2024-10-25 15:31:30.118694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:26:48.443 [2024-10-25 15:31:30.118708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.155423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.155463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:48.443 [2024-10-25 15:31:30.155496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.721 ms 00:26:48.443 [2024-10-25 15:31:30.155507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.192168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.192213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:48.443 [2024-10-25 15:31:30.192244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.674 ms 00:26:48.443 [2024-10-25 15:31:30.192254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.193014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.193042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:48.443 [2024-10-25 15:31:30.193056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.721 ms 00:26:48.443 [2024-10-25 15:31:30.193066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.316854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.316898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:48.443 [2024-10-25 15:31:30.316918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 123.930 ms 00:26:48.443 [2024-10-25 15:31:30.316944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.354073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:48.443 [2024-10-25 15:31:30.354115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:48.443 [2024-10-25 15:31:30.354145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.105 ms 00:26:48.443 [2024-10-25 15:31:30.354172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.389787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.389826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:48.443 [2024-10-25 15:31:30.389858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.620 ms 00:26:48.443 [2024-10-25 15:31:30.389868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.425366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.425406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:48.443 [2024-10-25 15:31:30.425439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.512 ms 00:26:48.443 [2024-10-25 15:31:30.425449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.425495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.425507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:48.443 [2024-10-25 15:31:30.425523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:48.443 [2024-10-25 15:31:30.425533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.425634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:48.443 [2024-10-25 15:31:30.425647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:48.443 [2024-10-25 15:31:30.425659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:48.443 [2024-10-25 15:31:30.425669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:48.443 [2024-10-25 15:31:30.426918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5978.069 ms, result 0 00:26:48.443 { 00:26:48.443 "name": "ftl", 00:26:48.443 "uuid": "ca70ed1e-89de-4b94-accc-9539f64b4874" 00:26:48.443 } 00:26:48.443 15:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:48.443 [2024-10-25 15:31:30.645581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.443 15:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:48.443 15:31:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:48.443 [2024-10-25 15:31:31.037293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:48.443 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:48.701 [2024-10-25 15:31:31.218748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:48.701 15:31:31 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:48.960 Fill FTL, iteration 1 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80668 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80668 /var/tmp/spdk.tgt.sock 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80668 ']' 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:48.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:48.960 15:31:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:48.960 [2024-10-25 15:31:31.667641] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
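The parameters above pin down the workload: bs=1048576 x count=1024 = 1073741824 bytes, so each pass moves exactly 1 GiB (= size) at queue depth 2, and iterations=2 means 2 GiB get written and checksummed in two passes. A condensed paraphrase of the loop upgrade_shutdown.sh is running here, assuming the tcp_dd helper from ftl/common.sh and with paths shortened:

    for ((i = 0; i < iterations; i++)); do
        # fill pass i: 1 GiB of random data at a 1 GiB-aligned offset (in 1 MiB blocks)
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
        # read the same 1 GiB back into a host file and record its checksum
        tcp_dd --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        sums[i]=$(md5sum test/ftl/file | cut -f1 -d' ')
    done

The seek/skip values 0 and 1024 in the trace below are exactly these 1 MiB-block offsets.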
00:26:48.960 [2024-10-25 15:31:31.667768] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80668 ] 00:26:49.219 [2024-10-25 15:31:31.845589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.478 [2024-10-25 15:31:31.955070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.413 15:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:50.413 15:31:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:26:50.413 15:31:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:50.413 ftln1 00:26:50.413 15:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:50.413 15:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80668 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80668 ']' 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80668 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80668 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:50.672 killing process with pid 80668 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80668' 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80668 00:26:50.672 15:31:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80668 00:26:53.205 15:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:53.205 15:31:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:53.205 [2024-10-25 15:31:35.707370] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
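What tcp_dd just did, step by step: it launched a short-lived spdk_tgt pinned to core 1 with its RPC socket at /var/tmp/spdk.tgt.sock, attached the exported namespace over NVMe/TCP as bdev ftln1, dumped that bdev configuration to JSON, and killed the target again; spdk_dd then replays the JSON to reach ftln1 without needing a live RPC target. The JSON assembly mirrors ftl/common.sh@171-173 above (the redirect into ini.json is not visible in the xtrace but is implied by the --json= flag on the spdk_dd command line; paths shortened):

    rpc="scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ini.json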
00:26:53.205 [2024-10-25 15:31:35.707491] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80721 ] 00:26:53.205 [2024-10-25 15:31:35.882329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.463 [2024-10-25 15:31:35.994323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.839  [2024-10-25T15:31:38.504Z] Copying: 246/1024 [MB] (246 MBps) [2024-10-25T15:31:39.881Z] Copying: 491/1024 [MB] (245 MBps) [2024-10-25T15:31:40.817Z] Copying: 736/1024 [MB] (245 MBps) [2024-10-25T15:31:40.817Z] Copying: 982/1024 [MB] (246 MBps) [2024-10-25T15:31:41.753Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:26:59.025 00:26:59.283 Calculate MD5 checksum, iteration 1 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:59.283 15:31:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:59.283 [2024-10-25 15:31:41.867687] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
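Fill iteration 1 holds a steady 245-246 MBps, so the 1 GiB pass takes roughly 1024 MiB / 245 MBps ≈ 4.2 s of copy time; every byte travels over NVMe/TCP and then through the FTL write path, buffered in cachen1p0, which FTL registered as its write buffer cache during startup. The checksum pass just launched reads the same 1 GiB back (--skip=0) into test/ftl/file.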
00:26:59.283 [2024-10-25 15:31:41.867807] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80785 ] 00:26:59.541 [2024-10-25 15:31:42.032772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.541 [2024-10-25 15:31:42.151423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.919  [2024-10-25T15:31:44.215Z] Copying: 706/1024 [MB] (706 MBps) [2024-10-25T15:31:45.150Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:27:02.422 00:27:02.422 15:31:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:02.422 15:31:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:04.394 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:04.394 Fill FTL, iteration 2 00:27:04.394 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c6bab96360266500229e38b9afc68a9b 00:27:04.394 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:04.394 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:04.394 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:04.395 15:31:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:04.395 [2024-10-25 15:31:46.756465] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
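The read-back for the checksum averages 699 MBps against ~245 MBps for the fill, i.e. reads over the same NVMe/TCP path run almost 3x faster than writes through the FTL write buffer (699 / 245 ≈ 2.9). The recorded value is produced as upgrade_shutdown.sh@47-48 show above, essentially:

    sums[0]=$(md5sum test/ftl/file | cut -f1 -d' ')   # -> c6bab96360266500229e38b9afc68a9b

Iteration 2 now repeats the fill at --seek=1024, i.e. into the second GiB of the FTL namespace.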
00:27:04.395 [2024-10-25 15:31:46.756581] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80835 ] 00:27:04.395 [2024-10-25 15:31:46.929840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.395 [2024-10-25 15:31:47.044867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.771  [2024-10-25T15:31:49.876Z] Copying: 248/1024 [MB] (248 MBps) [2024-10-25T15:31:50.814Z] Copying: 493/1024 [MB] (245 MBps) [2024-10-25T15:31:51.747Z] Copying: 741/1024 [MB] (248 MBps) [2024-10-25T15:31:51.747Z] Copying: 988/1024 [MB] (247 MBps) [2024-10-25T15:31:53.127Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:27:10.399 00:27:10.399 Calculate MD5 checksum, iteration 2 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:10.399 15:31:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:10.399 [2024-10-25 15:31:52.813875] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
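This second checksum pass reads the second GiB back (--skip=1024, again counted in 1 MiB blocks). Once it lands, sums[] holds one MD5 per written gigabyte; these are presumably what the test re-checks after the shutdown/upgrade cycle it is about to trigger. The property dance that follows flips prep_upgrade_on_shutdown to true via bdev_ftl_set_property, and the jq filter over bdev_ftl_get_properties counts cache chunks with non-zero utilization (here 3: two CLOSED chunks at 1.0 plus one OPEN chunk at 0.001953125), seemingly as a sanity check that the cache is not empty before the target is shut down.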
00:27:10.399 [2024-10-25 15:31:52.814012] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80901 ] 00:27:10.399 [2024-10-25 15:31:52.974832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.399 [2024-10-25 15:31:53.086189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.299  [2024-10-25T15:31:55.286Z] Copying: 703/1024 [MB] (703 MBps) [2024-10-25T15:31:56.663Z] Copying: 1024/1024 [MB] (average 698 MBps) 00:27:13.935 00:27:13.935 15:31:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:13.935 15:31:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2fbea36c5e46d98965d65389209f432e 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:15.839 [2024-10-25 15:31:58.536376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.839 [2024-10-25 15:31:58.536428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:15.839 [2024-10-25 15:31:58.536459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:15.839 [2024-10-25 15:31:58.536469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.839 [2024-10-25 15:31:58.536496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.839 [2024-10-25 15:31:58.536507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:15.839 [2024-10-25 15:31:58.536517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:15.839 [2024-10-25 15:31:58.536527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.839 [2024-10-25 15:31:58.536551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:15.839 [2024-10-25 15:31:58.536562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:15.839 [2024-10-25 15:31:58.536572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:15.839 [2024-10-25 15:31:58.536581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:15.839 [2024-10-25 15:31:58.536640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:27:15.839 true 00:27:15.839 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:16.126 { 00:27:16.126 "name": "ftl", 00:27:16.126 "properties": [ 00:27:16.126 { 00:27:16.126 "name": "superblock_version", 00:27:16.126 "value": 5, 00:27:16.126 "read-only": true 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "name": "base_device", 00:27:16.126 "bands": [ 00:27:16.126 { 00:27:16.126 "id": 0, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 
00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 1, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 2, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 3, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 4, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 5, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 6, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 7, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 8, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 9, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 10, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 11, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 12, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 13, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 14, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 15, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 16, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 17, 00:27:16.126 "state": "FREE", 00:27:16.126 "validity": 0.0 00:27:16.126 } 00:27:16.126 ], 00:27:16.126 "read-only": true 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "name": "cache_device", 00:27:16.126 "type": "bdev", 00:27:16.126 "chunks": [ 00:27:16.126 { 00:27:16.126 "id": 0, 00:27:16.126 "state": "INACTIVE", 00:27:16.126 "utilization": 0.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 1, 00:27:16.126 "state": "CLOSED", 00:27:16.126 "utilization": 1.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 2, 00:27:16.126 "state": "CLOSED", 00:27:16.126 "utilization": 1.0 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 3, 00:27:16.126 "state": "OPEN", 00:27:16.126 "utilization": 0.001953125 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "id": 4, 00:27:16.126 "state": "OPEN", 00:27:16.126 "utilization": 0.0 00:27:16.126 } 00:27:16.126 ], 00:27:16.126 "read-only": true 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "name": "verbose_mode", 00:27:16.126 "value": true, 00:27:16.126 "unit": "", 00:27:16.126 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:16.126 }, 00:27:16.126 { 00:27:16.126 "name": "prep_upgrade_on_shutdown", 00:27:16.126 "value": false, 00:27:16.126 "unit": "", 00:27:16.126 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:16.126 } 00:27:16.126 ] 00:27:16.126 } 00:27:16.126 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:16.385 [2024-10-25 15:31:58.900048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:16.385 [2024-10-25 15:31:58.900092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:16.385 [2024-10-25 15:31:58.900116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:16.385 [2024-10-25 15:31:58.900127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.385 [2024-10-25 15:31:58.900167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.385 [2024-10-25 15:31:58.900177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:16.385 [2024-10-25 15:31:58.900188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:16.385 [2024-10-25 15:31:58.900197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.385 [2024-10-25 15:31:58.900229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.385 [2024-10-25 15:31:58.900239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:16.385 [2024-10-25 15:31:58.900249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:16.385 [2024-10-25 15:31:58.900258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.385 [2024-10-25 15:31:58.900312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.256 ms, result 0 00:27:16.385 true 00:27:16.385 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:16.385 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:16.385 15:31:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:16.643 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:16.643 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:16.643 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:16.643 [2024-10-25 15:31:59.315840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.643 [2024-10-25 15:31:59.315884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:16.643 [2024-10-25 15:31:59.315899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:16.643 [2024-10-25 15:31:59.315910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.643 [2024-10-25 15:31:59.315935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.643 [2024-10-25 15:31:59.315946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:16.643 [2024-10-25 15:31:59.315956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:16.643 [2024-10-25 15:31:59.315965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:16.643 [2024-10-25 15:31:59.315985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:16.643 [2024-10-25 15:31:59.315995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:16.643 [2024-10-25 15:31:59.316005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:16.643 [2024-10-25 15:31:59.316014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:16.643 [2024-10-25 15:31:59.316071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.221 ms, result 0 00:27:16.643 true 00:27:16.643 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:16.903 { 00:27:16.903 "name": "ftl", 00:27:16.903 "properties": [ 00:27:16.903 { 00:27:16.903 "name": "superblock_version", 00:27:16.903 "value": 5, 00:27:16.903 "read-only": true 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "name": "base_device", 00:27:16.903 "bands": [ 00:27:16.903 { 00:27:16.903 "id": 0, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 1, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 2, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 3, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 4, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 5, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 6, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 7, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 8, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 9, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 10, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 11, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 12, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 13, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 14, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 15, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 16, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 17, 00:27:16.903 "state": "FREE", 00:27:16.903 "validity": 0.0 00:27:16.903 } 00:27:16.903 ], 00:27:16.903 "read-only": true 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "name": "cache_device", 00:27:16.903 "type": "bdev", 00:27:16.903 "chunks": [ 00:27:16.903 { 00:27:16.903 "id": 0, 00:27:16.903 "state": "INACTIVE", 00:27:16.903 "utilization": 0.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 1, 00:27:16.903 "state": "CLOSED", 00:27:16.903 "utilization": 1.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 2, 00:27:16.903 "state": "CLOSED", 00:27:16.903 "utilization": 1.0 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 3, 00:27:16.903 "state": "OPEN", 00:27:16.903 "utilization": 0.001953125 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "id": 4, 00:27:16.903 "state": "OPEN", 00:27:16.903 "utilization": 0.0 00:27:16.903 } 00:27:16.903 ], 00:27:16.903 "read-only": true 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "name": "verbose_mode", 
00:27:16.903 "value": true, 00:27:16.903 "unit": "", 00:27:16.903 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:16.903 }, 00:27:16.903 { 00:27:16.903 "name": "prep_upgrade_on_shutdown", 00:27:16.903 "value": true, 00:27:16.903 "unit": "", 00:27:16.903 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:16.903 } 00:27:16.903 ] 00:27:16.903 } 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80518 ]] 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80518 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80518 ']' 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80518 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80518 00:27:16.903 killing process with pid 80518 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80518' 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80518 00:27:16.903 15:31:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80518 00:27:18.280 [2024-10-25 15:32:00.655610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:18.280 [2024-10-25 15:32:00.675672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.280 [2024-10-25 15:32:00.675711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:18.280 [2024-10-25 15:32:00.675731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:18.280 [2024-10-25 15:32:00.675742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:18.280 [2024-10-25 15:32:00.675764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:18.280 [2024-10-25 15:32:00.679852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:18.280 [2024-10-25 15:32:00.679880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:18.280 [2024-10-25 15:32:00.679892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.078 ms 00:27:18.280 [2024-10-25 15:32:00.679902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.902852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.902909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:26.398 [2024-10-25 15:32:07.902926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7234.645 ms 00:27:26.398 [2024-10-25 15:32:07.902937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.904147] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.904191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:26.398 [2024-10-25 15:32:07.904204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.191 ms 00:27:26.398 [2024-10-25 15:32:07.904215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.905132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.905152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:26.398 [2024-10-25 15:32:07.905175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:27:26.398 [2024-10-25 15:32:07.905194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.920410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.920444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:26.398 [2024-10-25 15:32:07.920457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.200 ms 00:27:26.398 [2024-10-25 15:32:07.920490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.929639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.929674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:26.398 [2024-10-25 15:32:07.929686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.127 ms 00:27:26.398 [2024-10-25 15:32:07.929712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.929806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.929819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:26.398 [2024-10-25 15:32:07.929830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:27:26.398 [2024-10-25 15:32:07.929840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.944228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.944260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:26.398 [2024-10-25 15:32:07.944272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.388 ms 00:27:26.398 [2024-10-25 15:32:07.944297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.958853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.958884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:26.398 [2024-10-25 15:32:07.958895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.545 ms 00:27:26.398 [2024-10-25 15:32:07.958919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.973314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.973346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:26.398 [2024-10-25 15:32:07.973357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.383 ms 00:27:26.398 [2024-10-25 15:32:07.973367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.987610] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.398 [2024-10-25 15:32:07.987651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:26.398 [2024-10-25 15:32:07.987663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.192 ms 00:27:26.398 [2024-10-25 15:32:07.987672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.398 [2024-10-25 15:32:07.987706] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:26.398 [2024-10-25 15:32:07.987721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:26.398 [2024-10-25 15:32:07.987733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:26.398 [2024-10-25 15:32:07.987755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:26.398 [2024-10-25 15:32:07.987766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:26.398 [2024-10-25 15:32:07.987915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:26.398 [2024-10-25 15:32:07.987925] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ca70ed1e-89de-4b94-accc-9539f64b4874 00:27:26.398 [2024-10-25 15:32:07.987941] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:26.398 [2024-10-25 15:32:07.987950] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:26.398 [2024-10-25 15:32:07.987960] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:26.398 [2024-10-25 15:32:07.987970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:26.398 [2024-10-25 15:32:07.987995] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:26.398 [2024-10-25 15:32:07.988005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:26.399 [2024-10-25 15:32:07.988015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:26.399 [2024-10-25 15:32:07.988024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:26.399 [2024-10-25 15:32:07.988033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:26.399 [2024-10-25 15:32:07.988042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.399 [2024-10-25 15:32:07.988056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:26.399 [2024-10-25 15:32:07.988070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:27:26.399 [2024-10-25 15:32:07.988080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.007758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.399 [2024-10-25 15:32:08.007789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:26.399 [2024-10-25 15:32:08.007801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.667 ms 00:27:26.399 [2024-10-25 15:32:08.007828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.008376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.399 [2024-10-25 15:32:08.008389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:26.399 [2024-10-25 15:32:08.008399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:27:26.399 [2024-10-25 15:32:08.008409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.072629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.072662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:26.399 [2024-10-25 15:32:08.072674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.072701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.072735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.072746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:26.399 [2024-10-25 15:32:08.072755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.072765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.072842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.072860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:26.399 [2024-10-25 15:32:08.072872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.072882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.072899] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.072914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:26.399 [2024-10-25 15:32:08.072924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.072933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.191147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.191198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:26.399 [2024-10-25 15:32:08.191228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.191248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.287848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.287891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:26.399 [2024-10-25 15:32:08.287920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.287930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:26.399 [2024-10-25 15:32:08.288049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:26.399 [2024-10-25 15:32:08.288131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:26.399 [2024-10-25 15:32:08.288322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:26.399 [2024-10-25 15:32:08.288390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:26.399 [2024-10-25 15:32:08.288464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 
[2024-10-25 15:32:08.288524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:26.399 [2024-10-25 15:32:08.288536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:26.399 [2024-10-25 15:32:08.288550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:26.399 [2024-10-25 15:32:08.288560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.399 [2024-10-25 15:32:08.288684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7625.345 ms, result 0 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81096 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81096 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81096 ']' 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:28.933 15:32:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:28.933 [2024-10-25 15:32:11.567023] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
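Before the target restarts below, note that the statistics dump in the shutdown trace above is internally consistent. WAF, the write amplification factor, is total device writes divided by user writes, and the band validity fraction the restarted instance reports later (0.007843137...) is just the valid-block count over the band size from the "Band 3: 2048 / 261120" line. Recomputing both from the logged figures:

    # Figures taken verbatim from the dump above.
    awk 'BEGIN { printf "WAF      = %.4f\n", 786752 / 524288 }'   # -> 1.5006, as logged
    awk 'BEGIN { printf "validity = %.9f\n", 2048 / 261120 }'     # -> 0.007843137..., as logged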
00:27:28.933 [2024-10-25 15:32:11.567167] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81096 ] 00:27:29.192 [2024-10-25 15:32:11.740595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.192 [2024-10-25 15:32:11.842650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.130 [2024-10-25 15:32:12.760529] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.130 [2024-10-25 15:32:12.760598] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:30.389 [2024-10-25 15:32:12.906801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.389 [2024-10-25 15:32:12.906848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:30.389 [2024-10-25 15:32:12.906880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:30.389 [2024-10-25 15:32:12.906891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.906941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.906954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:30.390 [2024-10-25 15:32:12.906965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:27:30.390 [2024-10-25 15:32:12.906975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.907004] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:30.390 [2024-10-25 15:32:12.908066] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:30.390 [2024-10-25 15:32:12.908103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.908114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:30.390 [2024-10-25 15:32:12.908125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:27:30.390 [2024-10-25 15:32:12.908134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.909593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:30.390 [2024-10-25 15:32:12.929130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.929171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:30.390 [2024-10-25 15:32:12.929192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.570 ms 00:27:30.390 [2024-10-25 15:32:12.929208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.929286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.929299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:30.390 [2024-10-25 15:32:12.929310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:30.390 [2024-10-25 15:32:12.929320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.936187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 
15:32:12.936226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:30.390 [2024-10-25 15:32:12.936254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.798 ms 00:27:30.390 [2024-10-25 15:32:12.936264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.936325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.936339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:30.390 [2024-10-25 15:32:12.936350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:30.390 [2024-10-25 15:32:12.936360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.936403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.936415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:30.390 [2024-10-25 15:32:12.936426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:30.390 [2024-10-25 15:32:12.936439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.936465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:30.390 [2024-10-25 15:32:12.941344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.941381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:30.390 [2024-10-25 15:32:12.941408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.892 ms 00:27:30.390 [2024-10-25 15:32:12.941419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.941450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.941461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:30.390 [2024-10-25 15:32:12.941471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:30.390 [2024-10-25 15:32:12.941481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.941537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:30.390 [2024-10-25 15:32:12.941561] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:30.390 [2024-10-25 15:32:12.941599] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:30.390 [2024-10-25 15:32:12.941617] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:30.390 [2024-10-25 15:32:12.941705] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:30.390 [2024-10-25 15:32:12.941718] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:30.390 [2024-10-25 15:32:12.941731] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:30.390 [2024-10-25 15:32:12.941744] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:30.390 [2024-10-25 15:32:12.941756] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:30.390 [2024-10-25 15:32:12.941767] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:30.390 [2024-10-25 15:32:12.941780] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:30.390 [2024-10-25 15:32:12.941790] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:30.390 [2024-10-25 15:32:12.941799] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:30.390 [2024-10-25 15:32:12.941809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.941819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:30.390 [2024-10-25 15:32:12.941830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:27:30.390 [2024-10-25 15:32:12.941839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.941912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.390 [2024-10-25 15:32:12.941922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:30.390 [2024-10-25 15:32:12.941933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:27:30.390 [2024-10-25 15:32:12.941946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.390 [2024-10-25 15:32:12.942036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:30.390 [2024-10-25 15:32:12.942050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:30.390 [2024-10-25 15:32:12.942060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:30.390 [2024-10-25 15:32:12.942090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:30.390 [2024-10-25 15:32:12.942109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:30.390 [2024-10-25 15:32:12.942118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:30.390 [2024-10-25 15:32:12.942127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:30.390 [2024-10-25 15:32:12.942150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:30.390 [2024-10-25 15:32:12.942159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:30.390 [2024-10-25 15:32:12.942177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:30.390 [2024-10-25 15:32:12.942186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:30.390 [2024-10-25 15:32:12.942222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:30.390 [2024-10-25 15:32:12.942231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942241] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:30.390 [2024-10-25 15:32:12.942250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:30.390 [2024-10-25 15:32:12.942259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:30.390 [2024-10-25 15:32:12.942277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:30.390 [2024-10-25 15:32:12.942286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:30.390 [2024-10-25 15:32:12.942314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:30.390 [2024-10-25 15:32:12.942323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:30.390 [2024-10-25 15:32:12.942341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:30.390 [2024-10-25 15:32:12.942350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:30.390 [2024-10-25 15:32:12.942368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:30.390 [2024-10-25 15:32:12.942377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:30.390 [2024-10-25 15:32:12.942395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:30.390 [2024-10-25 15:32:12.942403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:30.390 [2024-10-25 15:32:12.942422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.390 [2024-10-25 15:32:12.942440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:30.391 [2024-10-25 15:32:12.942449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:30.391 [2024-10-25 15:32:12.942459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.391 [2024-10-25 15:32:12.942468] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:30.391 [2024-10-25 15:32:12.942478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:30.391 [2024-10-25 15:32:12.942488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:30.391 [2024-10-25 15:32:12.942497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:30.391 [2024-10-25 15:32:12.942507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:30.391 [2024-10-25 15:32:12.942516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:30.391 [2024-10-25 15:32:12.942525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:30.391 [2024-10-25 15:32:12.942535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:30.391 [2024-10-25 15:32:12.942544] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:30.391 [2024-10-25 15:32:12.942553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:30.391 [2024-10-25 15:32:12.942564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:30.391 [2024-10-25 15:32:12.942635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:30.391 [2024-10-25 15:32:12.942656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:30.391 [2024-10-25 15:32:12.942686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:30.391 [2024-10-25 15:32:12.942696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:30.391 [2024-10-25 15:32:12.942707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:30.391 [2024-10-25 15:32:12.942717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:30.391 [2024-10-25 15:32:12.942789] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:30.391 [2024-10-25 15:32:12.942800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:30.391 [2024-10-25 15:32:12.942821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:30.391 [2024-10-25 15:32:12.942831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:30.391 [2024-10-25 15:32:12.942843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:30.391 [2024-10-25 15:32:12.942854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:30.391 [2024-10-25 15:32:12.942864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:30.391 [2024-10-25 15:32:12.942875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.873 ms 00:27:30.391 [2024-10-25 15:32:12.942884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:30.391 [2024-10-25 15:32:12.942930] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:30.391 [2024-10-25 15:32:12.942944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:33.679 [2024-10-25 15:32:16.346898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.679 [2024-10-25 15:32:16.346971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:33.679 [2024-10-25 15:32:16.346989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3409.492 ms 00:27:33.679 [2024-10-25 15:32:16.346999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.679 [2024-10-25 15:32:16.388421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.679 [2024-10-25 15:32:16.388465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:33.679 [2024-10-25 15:32:16.388498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.426 ms 00:27:33.679 [2024-10-25 15:32:16.388509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.679 [2024-10-25 15:32:16.388605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.680 [2024-10-25 15:32:16.388618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:33.680 [2024-10-25 15:32:16.388644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:33.680 [2024-10-25 15:32:16.388654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.435479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.435522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:33.939 [2024-10-25 15:32:16.435536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.854 ms 00:27:33.939 [2024-10-25 15:32:16.435547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.435600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.435611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:33.939 [2024-10-25 15:32:16.435623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:33.939 [2024-10-25 15:32:16.435633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.436121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.436144] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:33.939 [2024-10-25 15:32:16.436156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:27:33.939 [2024-10-25 15:32:16.436167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.436225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.436237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:33.939 [2024-10-25 15:32:16.436248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:33.939 [2024-10-25 15:32:16.436258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.456739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.456780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:33.939 [2024-10-25 15:32:16.456795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.491 ms 00:27:33.939 [2024-10-25 15:32:16.456805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.475963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:33.939 [2024-10-25 15:32:16.476006] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:33.939 [2024-10-25 15:32:16.476021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.476033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:33.939 [2024-10-25 15:32:16.476045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.123 ms 00:27:33.939 [2024-10-25 15:32:16.476055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.495395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.495435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:33.939 [2024-10-25 15:32:16.495448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.325 ms 00:27:33.939 [2024-10-25 15:32:16.495459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.513638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.513673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:33.939 [2024-10-25 15:32:16.513702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.161 ms 00:27:33.939 [2024-10-25 15:32:16.513713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.531639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.531680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:33.939 [2024-10-25 15:32:16.531709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.914 ms 00:27:33.939 [2024-10-25 15:32:16.531719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.532518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.532549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:33.939 [2024-10-25 
15:32:16.532565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.693 ms 00:27:33.939 [2024-10-25 15:32:16.532575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.632260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.632326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:33.939 [2024-10-25 15:32:16.632342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.821 ms 00:27:33.939 [2024-10-25 15:32:16.632352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.642830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:33.939 [2024-10-25 15:32:16.643600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.643630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:33.939 [2024-10-25 15:32:16.643643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.201 ms 00:27:33.939 [2024-10-25 15:32:16.643653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.643730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.643744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:33.939 [2024-10-25 15:32:16.643758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:33.939 [2024-10-25 15:32:16.643768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.643829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.643841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:33.939 [2024-10-25 15:32:16.643852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:33.939 [2024-10-25 15:32:16.643862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.939 [2024-10-25 15:32:16.643885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.939 [2024-10-25 15:32:16.643895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:33.939 [2024-10-25 15:32:16.643906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:33.940 [2024-10-25 15:32:16.643919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.940 [2024-10-25 15:32:16.643955] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:33.940 [2024-10-25 15:32:16.643968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.940 [2024-10-25 15:32:16.643978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:33.940 [2024-10-25 15:32:16.643988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:33.940 [2024-10-25 15:32:16.643999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.198 [2024-10-25 15:32:16.679939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.198 [2024-10-25 15:32:16.679981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:34.198 [2024-10-25 15:32:16.680018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.976 ms 00:27:34.198 [2024-10-25 15:32:16.680029] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.198 [2024-10-25 15:32:16.680114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.198 [2024-10-25 15:32:16.680128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:34.198 [2024-10-25 15:32:16.680139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:34.198 [2024-10-25 15:32:16.680148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.198 [2024-10-25 15:32:16.681260] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3780.115 ms, result 0 00:27:34.198 [2024-10-25 15:32:16.696306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.198 [2024-10-25 15:32:16.712291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:34.198 [2024-10-25 15:32:16.721127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:34.198 15:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:34.198 15:32:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:34.198 15:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:34.198 15:32:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:34.198 15:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:34.457 [2024-10-25 15:32:16.952790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.457 [2024-10-25 15:32:16.952832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.457 [2024-10-25 15:32:16.952847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:34.457 [2024-10-25 15:32:16.952858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.457 [2024-10-25 15:32:16.952887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.457 [2024-10-25 15:32:16.952898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.457 [2024-10-25 15:32:16.952908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:34.457 [2024-10-25 15:32:16.952918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.457 [2024-10-25 15:32:16.952939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.457 [2024-10-25 15:32:16.952950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.457 [2024-10-25 15:32:16.952960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.457 [2024-10-25 15:32:16.952970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.457 [2024-10-25 15:32:16.953029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.234 ms, result 0 00:27:34.457 true 00:27:34.457 15:32:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.457 { 00:27:34.457 "name": "ftl", 00:27:34.457 "properties": [ 00:27:34.457 { 00:27:34.457 "name": "superblock_version", 00:27:34.457 "value": 5, 00:27:34.457 "read-only": true 00:27:34.457 }, 
00:27:34.457 { 00:27:34.457 "name": "base_device", 00:27:34.457 "bands": [ 00:27:34.457 { 00:27:34.457 "id": 0, 00:27:34.457 "state": "CLOSED", 00:27:34.457 "validity": 1.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 1, 00:27:34.457 "state": "CLOSED", 00:27:34.457 "validity": 1.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 2, 00:27:34.457 "state": "CLOSED", 00:27:34.457 "validity": 0.007843137254901933 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 3, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 4, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 5, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 6, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 7, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 8, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 9, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 10, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 11, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 12, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 13, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 14, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 15, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 16, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 17, 00:27:34.457 "state": "FREE", 00:27:34.457 "validity": 0.0 00:27:34.457 } 00:27:34.457 ], 00:27:34.457 "read-only": true 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "name": "cache_device", 00:27:34.457 "type": "bdev", 00:27:34.457 "chunks": [ 00:27:34.457 { 00:27:34.457 "id": 0, 00:27:34.457 "state": "INACTIVE", 00:27:34.457 "utilization": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 1, 00:27:34.457 "state": "OPEN", 00:27:34.457 "utilization": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 2, 00:27:34.457 "state": "OPEN", 00:27:34.457 "utilization": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 3, 00:27:34.457 "state": "FREE", 00:27:34.457 "utilization": 0.0 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "id": 4, 00:27:34.457 "state": "FREE", 00:27:34.457 "utilization": 0.0 00:27:34.457 } 00:27:34.457 ], 00:27:34.457 "read-only": true 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "name": "verbose_mode", 00:27:34.457 "value": true, 00:27:34.457 "unit": "", 00:27:34.457 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:34.457 }, 00:27:34.457 { 00:27:34.457 "name": "prep_upgrade_on_shutdown", 00:27:34.457 "value": false, 00:27:34.457 "unit": "", 00:27:34.457 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:34.457 } 00:27:34.457 ] 00:27:34.457 } 00:27:34.457 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:34.457 15:32:17 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.457 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:34.716 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:34.716 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:34.716 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:34.716 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.716 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:34.974 Validate MD5 checksum, iteration 1 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:34.974 15:32:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:34.974 [2024-10-25 15:32:17.684928] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
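[Editor: the two property checks traced above reduce to a short shell sequence. As a minimal sketch (bdev name "ftl" and script paths as in this run; the jq filters are copied verbatim from the trace, the surrounding wrapper is a reconstruction, not the literal test code):

    #!/usr/bin/env bash
    # Pre-shutdown FTL state check, reconstructed from the xtrace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)

    # Cache chunks still holding data; the test expects none before it
    # proceeds to checksum validation.
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")

    # Bands reported as OPENED (note the filter keys on a property literally
    # named "bands", exactly as shown in the trace).
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")

    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1

Both counts are 0 here, so the test moves on to the first checksum pass.]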
00:27:34.974 [2024-10-25 15:32:17.685039] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81180 ] 00:27:35.233 [2024-10-25 15:32:17.861574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.490 [2024-10-25 15:32:17.970242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.391  [2024-10-25T15:32:20.119Z] Copying: 708/1024 [MB] (708 MBps) [2024-10-25T15:32:22.064Z] Copying: 1024/1024 [MB] (average 706 MBps) 00:27:39.336 00:27:39.336 15:32:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:39.336 15:32:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:40.713 Validate MD5 checksum, iteration 2 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c6bab96360266500229e38b9afc68a9b 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c6bab96360266500229e38b9afc68a9b != \c\6\b\a\b\9\6\3\6\0\2\6\6\5\0\0\2\2\9\e\3\8\b\9\a\f\c\6\8\a\9\b ]] 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:40.713 15:32:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:40.713 [2024-10-25 15:32:23.406039] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
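[Editor: each "Validate MD5 checksum" pass reads a 1 GiB slice of ftln1 at a growing offset and hashes it. A hedged sketch of the loop driving both passes (tcp_dd is the ftl/common.sh helper visible in the trace; the names "iterations" and "expected" are assumptions, not quoted from upgrade_shutdown.sh):

    test_validate_checksum() {
        local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # 1024 blocks of 1 MiB at queue depth 2; the offset advances
            # by 1 GiB (1024 blocks) per pass, matching skip=0, 1024, 2048.
            tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            skip=$((skip + 1024))
            sum=$(md5sum "$file" | cut -f1 -d ' ')
            # The same offsets must hash to the same values before and after
            # the dirty shutdown (c6bab963... and 2fbea36c... in this run).
            [[ $sum != "${expected[i]}" ]] && return 1
        done
    }

The bash xtrace escapes the right-hand side of the != test, which is why the expected sums appear backslash-escaped in the log.]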
00:27:40.713 [2024-10-25 15:32:23.406168] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81237 ] 00:27:40.971 [2024-10-25 15:32:23.584397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.971 [2024-10-25 15:32:23.698598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.882  [2024-10-25T15:32:25.867Z] Copying: 715/1024 [MB] (715 MBps) [2024-10-25T15:32:30.058Z] Copying: 1024/1024 [MB] (average 714 MBps) 00:27:47.330 00:27:47.330 15:32:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:47.330 15:32:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2fbea36c5e46d98965d65389209f432e 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2fbea36c5e46d98965d65389209f432e != \2\f\b\e\a\3\6\c\5\e\4\6\d\9\8\9\6\5\d\6\5\3\8\9\2\0\9\f\4\3\2\e ]] 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81096 ]] 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81096 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81326 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81326 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81326 ']' 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
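[Editor: the restart under test is deliberately unclean. Reduced to its core, the sequence traced above is the following pattern (PIDs 81096/81326 are from this run; waitforlisten is the autotest_common.sh helper seen in the trace; a sketch, not the exact script):

    # Forced shutdown: no FTL cleanup runs, leaving shared memory dirty
    # ("SHM: clean 0, shm_clean 0" in the startup log that follows).
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Bring the target back with the saved bdev config; FTL then walks its
    # recovery chain: load superblock, recover band state, replay P2L
    # checkpoints, and re-close the two open cache chunks (seq ids 14, 15).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
]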
00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:48.707 15:32:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:48.707 [2024-10-25 15:32:31.336916] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:27:48.707 [2024-10-25 15:32:31.337052] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81326 ] 00:27:48.707 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81096 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:48.965 [2024-10-25 15:32:31.514513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.965 [2024-10-25 15:32:31.622050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.903 [2024-10-25 15:32:32.552749] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:49.903 [2024-10-25 15:32:32.552816] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:50.163 [2024-10-25 15:32:32.698989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.699033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:50.163 [2024-10-25 15:32:32.699055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:50.163 [2024-10-25 15:32:32.699066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.699116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.699128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:50.163 [2024-10-25 15:32:32.699139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:50.163 [2024-10-25 15:32:32.699148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.699192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:50.163 [2024-10-25 15:32:32.700145] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:50.163 [2024-10-25 15:32:32.700172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.700194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:50.163 [2024-10-25 15:32:32.700205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.007 ms 00:27:50.163 [2024-10-25 15:32:32.700215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.700560] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:50.163 [2024-10-25 15:32:32.724025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.724062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:50.163 [2024-10-25 15:32:32.724077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.503 ms 00:27:50.163 [2024-10-25 15:32:32.724087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.738546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:27:50.163 [2024-10-25 15:32:32.738583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:50.163 [2024-10-25 15:32:32.738598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:50.163 [2024-10-25 15:32:32.738608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.739085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.739101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:50.163 [2024-10-25 15:32:32.739112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.382 ms 00:27:50.163 [2024-10-25 15:32:32.739122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.739180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.739213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:50.163 [2024-10-25 15:32:32.739224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:50.163 [2024-10-25 15:32:32.739233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.739260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.739281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:50.163 [2024-10-25 15:32:32.739292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:50.163 [2024-10-25 15:32:32.739302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.163 [2024-10-25 15:32:32.739326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:50.163 [2024-10-25 15:32:32.743208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.163 [2024-10-25 15:32:32.743236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:50.163 [2024-10-25 15:32:32.743248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.892 ms 00:27:50.163 [2024-10-25 15:32:32.743258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.164 [2024-10-25 15:32:32.743283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.164 [2024-10-25 15:32:32.743297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:50.164 [2024-10-25 15:32:32.743307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.164 [2024-10-25 15:32:32.743317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.164 [2024-10-25 15:32:32.743355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:50.164 [2024-10-25 15:32:32.743377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:50.164 [2024-10-25 15:32:32.743411] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:50.164 [2024-10-25 15:32:32.743428] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:50.164 [2024-10-25 15:32:32.743518] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:50.164 [2024-10-25 15:32:32.743530] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:50.164 [2024-10-25 15:32:32.743543] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:50.164 [2024-10-25 15:32:32.743555] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:50.164 [2024-10-25 15:32:32.743566] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:50.164 [2024-10-25 15:32:32.743577] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:50.164 [2024-10-25 15:32:32.743587] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:50.164 [2024-10-25 15:32:32.743596] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:50.164 [2024-10-25 15:32:32.743605] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:50.164 [2024-10-25 15:32:32.743615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.164 [2024-10-25 15:32:32.743625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:50.164 [2024-10-25 15:32:32.743639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:27:50.164 [2024-10-25 15:32:32.743649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.164 [2024-10-25 15:32:32.743720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.164 [2024-10-25 15:32:32.743730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:50.164 [2024-10-25 15:32:32.743739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:50.164 [2024-10-25 15:32:32.743748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.164 [2024-10-25 15:32:32.743835] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:50.164 [2024-10-25 15:32:32.743846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:50.164 [2024-10-25 15:32:32.743857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.164 [2024-10-25 15:32:32.743870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.743882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:50.164 [2024-10-25 15:32:32.743891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.743901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:50.164 [2024-10-25 15:32:32.743910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:50.164 [2024-10-25 15:32:32.743919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:50.164 [2024-10-25 15:32:32.743927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.743937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:50.164 [2024-10-25 15:32:32.743945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:50.164 [2024-10-25 15:32:32.743954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.743963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:50.164 [2024-10-25 15:32:32.743972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:27:50.164 [2024-10-25 15:32:32.743981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.743989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:50.164 [2024-10-25 15:32:32.743998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:50.164 [2024-10-25 15:32:32.744007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:50.164 [2024-10-25 15:32:32.744024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:50.164 [2024-10-25 15:32:32.744061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:50.164 [2024-10-25 15:32:32.744089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:50.164 [2024-10-25 15:32:32.744116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:50.164 [2024-10-25 15:32:32.744142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:50.164 [2024-10-25 15:32:32.744169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:50.164 [2024-10-25 15:32:32.744216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:50.164 [2024-10-25 15:32:32.744242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:50.164 [2024-10-25 15:32:32.744251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744260] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:50.164 [2024-10-25 15:32:32.744270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:50.164 [2024-10-25 15:32:32.744279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:27:50.164 [2024-10-25 15:32:32.744298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:50.164 [2024-10-25 15:32:32.744308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:50.164 [2024-10-25 15:32:32.744317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:50.164 [2024-10-25 15:32:32.744326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:50.164 [2024-10-25 15:32:32.744334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:50.164 [2024-10-25 15:32:32.744344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:50.164 [2024-10-25 15:32:32.744354] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:50.164 [2024-10-25 15:32:32.744365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.164 [2024-10-25 15:32:32.744376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:50.164 [2024-10-25 15:32:32.744386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:50.164 [2024-10-25 15:32:32.744396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:50.164 [2024-10-25 15:32:32.744406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:50.164 [2024-10-25 15:32:32.744416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:50.164 [2024-10-25 15:32:32.744426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:50.164 [2024-10-25 15:32:32.744436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:50.164 [2024-10-25 15:32:32.744446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:50.164 [2024-10-25 15:32:32.744456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:50.164 [2024-10-25 15:32:32.744467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:50.165 [2024-10-25 15:32:32.744518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:27:50.165 [2024-10-25 15:32:32.744529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:50.165 [2024-10-25 15:32:32.744550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:50.165 [2024-10-25 15:32:32.744560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:50.165 [2024-10-25 15:32:32.744570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:50.165 [2024-10-25 15:32:32.744581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.744595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:50.165 [2024-10-25 15:32:32.744604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.801 ms 00:27:50.165 [2024-10-25 15:32:32.744614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.781106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.781139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:50.165 [2024-10-25 15:32:32.781152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.501 ms 00:27:50.165 [2024-10-25 15:32:32.781162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.781231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.781242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:50.165 [2024-10-25 15:32:32.781254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:50.165 [2024-10-25 15:32:32.781264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.827437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.827469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:50.165 [2024-10-25 15:32:32.827481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.191 ms 00:27:50.165 [2024-10-25 15:32:32.827507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.827544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.827555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:50.165 [2024-10-25 15:32:32.827566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.165 [2024-10-25 15:32:32.827576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.827708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.827720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:50.165 [2024-10-25 15:32:32.827730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:50.165 [2024-10-25 15:32:32.827740] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.827780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.827791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:50.165 [2024-10-25 15:32:32.827801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:50.165 [2024-10-25 15:32:32.827810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.847719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.847751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:50.165 [2024-10-25 15:32:32.847764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.915 ms 00:27:50.165 [2024-10-25 15:32:32.847793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.847925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.847949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:50.165 [2024-10-25 15:32:32.847961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:50.165 [2024-10-25 15:32:32.847970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.165 [2024-10-25 15:32:32.885244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.165 [2024-10-25 15:32:32.885279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:50.165 [2024-10-25 15:32:32.885294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.307 ms 00:27:50.165 [2024-10-25 15:32:32.885304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:32.899659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:32.899691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:50.424 [2024-10-25 15:32:32.899713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:27:50.424 [2024-10-25 15:32:32.899722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:32.985649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:32.985697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:50.424 [2024-10-25 15:32:32.985719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.004 ms 00:27:50.424 [2024-10-25 15:32:32.985729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:32.985892] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:50.424 [2024-10-25 15:32:32.986000] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:50.424 [2024-10-25 15:32:32.986125] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:50.424 [2024-10-25 15:32:32.986242] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:50.424 [2024-10-25 15:32:32.986256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:32.986266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:50.424 [2024-10-25 
15:32:32.986277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.472 ms 00:27:50.424 [2024-10-25 15:32:32.986287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:32.986377] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:50.424 [2024-10-25 15:32:32.986397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:32.986411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:50.424 [2024-10-25 15:32:32.986422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:50.424 [2024-10-25 15:32:32.986432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:33.008693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:33.008735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:50.424 [2024-10-25 15:32:33.008764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.273 ms 00:27:50.424 [2024-10-25 15:32:33.008775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:33.022345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:33.022382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:50.424 [2024-10-25 15:32:33.022394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:50.424 [2024-10-25 15:32:33.022420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.424 [2024-10-25 15:32:33.022520] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:50.424 [2024-10-25 15:32:33.022712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.424 [2024-10-25 15:32:33.022723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:50.424 [2024-10-25 15:32:33.022734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:27:50.424 [2024-10-25 15:32:33.022744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.008 [2024-10-25 15:32:33.579403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.008 [2024-10-25 15:32:33.579457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:51.008 [2024-10-25 15:32:33.579475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 556.435 ms 00:27:51.008 [2024-10-25 15:32:33.579486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.008 [2024-10-25 15:32:33.585025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.008 [2024-10-25 15:32:33.585062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:51.008 [2024-10-25 15:32:33.585075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.081 ms 00:27:51.008 [2024-10-25 15:32:33.585085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.008 [2024-10-25 15:32:33.585535] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:27:51.008 [2024-10-25 15:32:33.585564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.008 [2024-10-25 15:32:33.585575] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:51.008 [2024-10-25 15:32:33.585587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.442 ms 00:27:51.008 [2024-10-25 15:32:33.585597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.008 [2024-10-25 15:32:33.585629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.008 [2024-10-25 15:32:33.585641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:51.008 [2024-10-25 15:32:33.585652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.008 [2024-10-25 15:32:33.585662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.008 [2024-10-25 15:32:33.585702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 564.103 ms, result 0 00:27:51.008 [2024-10-25 15:32:33.585744] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:51.008 [2024-10-25 15:32:33.585822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.008 [2024-10-25 15:32:33.585832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:51.008 [2024-10-25 15:32:33.585841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:27:51.008 [2024-10-25 15:32:33.585850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 15:32:34.152314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.597 [2024-10-25 15:32:34.152376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:51.597 [2024-10-25 15:32:34.152393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 566.322 ms 00:27:51.597 [2024-10-25 15:32:34.152404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 15:32:34.157969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.597 [2024-10-25 15:32:34.158007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:51.597 [2024-10-25 15:32:34.158019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:27:51.597 [2024-10-25 15:32:34.158030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 15:32:34.158557] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:51.597 [2024-10-25 15:32:34.158586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.597 [2024-10-25 15:32:34.158596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:51.597 [2024-10-25 15:32:34.158607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:27:51.597 [2024-10-25 15:32:34.158617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 15:32:34.158649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.597 [2024-10-25 15:32:34.158661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:51.597 [2024-10-25 15:32:34.158671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.597 [2024-10-25 15:32:34.158680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 
15:32:34.158718] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 573.901 ms, result 0 00:27:51.597 [2024-10-25 15:32:34.158758] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:51.597 [2024-10-25 15:32:34.158771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.597 [2024-10-25 15:32:34.158784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.597 [2024-10-25 15:32:34.158794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:51.597 [2024-10-25 15:32:34.158806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1138.134 ms 00:27:51.597 [2024-10-25 15:32:34.158816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.597 [2024-10-25 15:32:34.158845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.158857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:51.598 [2024-10-25 15:32:34.158871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:51.598 [2024-10-25 15:32:34.158881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.170259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:51.598 [2024-10-25 15:32:34.170409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.170432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:51.598 [2024-10-25 15:32:34.170444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.529 ms 00:27:51.598 [2024-10-25 15:32:34.170454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.171037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.171066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:51.598 [2024-10-25 15:32:34.171082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.512 ms 00:27:51.598 [2024-10-25 15:32:34.171092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:51.598 [2024-10-25 15:32:34.173157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.015 ms 00:27:51.598 [2024-10-25 15:32:34.173167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:51.598 [2024-10-25 15:32:34.173238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.598 [2024-10-25 15:32:34.173252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:51.598 
[2024-10-25 15:32:34.173372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:51.598 [2024-10-25 15:32:34.173381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:51.598 [2024-10-25 15:32:34.173424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.598 [2024-10-25 15:32:34.173434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:51.598 [2024-10-25 15:32:34.173476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:51.598 [2024-10-25 15:32:34.173496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:51.598 [2024-10-25 15:32:34.173506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.173559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.598 [2024-10-25 15:32:34.173570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:51.598 [2024-10-25 15:32:34.173581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:51.598 [2024-10-25 15:32:34.173591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.598 [2024-10-25 15:32:34.174491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1477.451 ms, result 0 00:27:51.598 [2024-10-25 15:32:34.186825] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.598 [2024-10-25 15:32:34.202807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:51.598 [2024-10-25 15:32:34.212191] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:51.598 Validate MD5 checksum, iteration 1 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:51.598 15:32:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:51.598 15:32:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:51.857 [2024-10-25 15:32:34.349811] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 00:27:51.858 [2024-10-25 15:32:34.349925] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81361 ] 00:27:51.858 [2024-10-25 15:32:34.529454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.117 [2024-10-25 15:32:34.645582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.024  [2024-10-25T15:32:36.752Z] Copying: 710/1024 [MB] (710 MBps) [2024-10-25T15:32:40.039Z] Copying: 1024/1024 [MB] (average 708 MBps) 00:27:57.311 00:27:57.311 15:32:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:57.311 15:32:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.685 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.685 Validate MD5 checksum, iteration 2 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c6bab96360266500229e38b9afc68a9b 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c6bab96360266500229e38b9afc68a9b != \c\6\b\a\b\9\6\3\6\0\2\6\6\5\0\0\2\2\9\e\3\8\b\9\a\f\c\6\8\a\9\b ]] 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:58.686 15:32:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.686 [2024-10-25 15:32:41.168196] Starting SPDK v25.01-pre git sha1 
183001ebc / DPDK 24.03.0 initialization... 00:27:58.686 [2024-10-25 15:32:41.168525] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81433 ] 00:27:58.686 [2024-10-25 15:32:41.350236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.945 [2024-10-25 15:32:41.465993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.854  [2024-10-25T15:32:43.582Z] Copying: 708/1024 [MB] (708 MBps) [2024-10-25T15:32:44.958Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:28:02.230 00:28:02.230 15:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:02.230 15:32:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2fbea36c5e46d98965d65389209f432e 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2fbea36c5e46d98965d65389209f432e != \2\f\b\e\a\3\6\c\5\e\4\6\d\9\8\9\6\5\d\6\5\3\8\9\2\0\9\f\4\3\2\e ]] 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81326 ]] 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81326 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81326 ']' 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81326 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81326 00:28:04.134 killing process with pid 81326 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81326' 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 81326 00:28:04.134 15:32:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81326 00:28:05.071 [2024-10-25 15:32:47.791120] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:05.331 [2024-10-25 15:32:47.811591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.811639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:05.331 [2024-10-25 15:32:47.811654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:05.331 [2024-10-25 15:32:47.811680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.811702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:05.331 [2024-10-25 15:32:47.815697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.815733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:05.331 [2024-10-25 15:32:47.815750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.984 ms 00:28:05.331 [2024-10-25 15:32:47.815760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.815973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.815986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:05.331 [2024-10-25 15:32:47.815997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:28:05.331 [2024-10-25 15:32:47.816007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.817126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.817160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:05.331 [2024-10-25 15:32:47.817172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.105 ms 00:28:05.331 [2024-10-25 15:32:47.817209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.818131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.818161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:05.331 [2024-10-25 15:32:47.818174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.874 ms 00:28:05.331 [2024-10-25 15:32:47.818199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.832742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.832784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:05.331 [2024-10-25 15:32:47.832797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.531 ms 00:28:05.331 [2024-10-25 15:32:47.832812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.840619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.840660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:05.331 [2024-10-25 15:32:47.840689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.765 ms 00:28:05.331 [2024-10-25 15:32:47.840699] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.840775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.840788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:05.331 [2024-10-25 15:32:47.840799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:05.331 [2024-10-25 15:32:47.840808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.855427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.855466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:05.331 [2024-10-25 15:32:47.855478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.618 ms 00:28:05.331 [2024-10-25 15:32:47.855487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.870437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.870475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:05.331 [2024-10-25 15:32:47.870487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.920 ms 00:28:05.331 [2024-10-25 15:32:47.870496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.884669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.884708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:05.331 [2024-10-25 15:32:47.884720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.145 ms 00:28:05.331 [2024-10-25 15:32:47.884730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.899009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.899046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:05.331 [2024-10-25 15:32:47.899070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.214 ms 00:28:05.331 [2024-10-25 15:32:47.899081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.899136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:05.331 [2024-10-25 15:32:47.899155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:05.331 [2024-10-25 15:32:47.899171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:05.331 [2024-10-25 15:32:47.899208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:05.331 [2024-10-25 15:32:47.899224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 
[2024-10-25 15:32:47.899276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:05.331 [2024-10-25 15:32:47.899380] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:05.331 [2024-10-25 15:32:47.899390] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: ca70ed1e-89de-4b94-accc-9539f64b4874 00:28:05.331 [2024-10-25 15:32:47.899401] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:05.331 [2024-10-25 15:32:47.899410] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:05.331 [2024-10-25 15:32:47.899419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:05.331 [2024-10-25 15:32:47.899429] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:05.331 [2024-10-25 15:32:47.899439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:05.331 [2024-10-25 15:32:47.899449] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:05.331 [2024-10-25 15:32:47.899458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:05.331 [2024-10-25 15:32:47.899467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:05.331 [2024-10-25 15:32:47.899476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:05.331 [2024-10-25 15:32:47.899487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.899503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:05.331 [2024-10-25 15:32:47.899513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.352 ms 00:28:05.331 [2024-10-25 15:32:47.899523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.919534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.919573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:05.331 [2024-10-25 15:32:47.919586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.010 ms 00:28:05.331 [2024-10-25 15:32:47.919596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
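
The checksum pass traced a few lines earlier (upgrade_shutdown.sh@97-105) hashes the FTL test file and compares the digest against the value recorded before shutdown, once per loop iteration. A simplified sketch of that verify loop, in bash like the script itself; the real script also rewrites part of the file with dd (the skip=2048 step) and restarts the FTL device between passes, and the iteration count here is assumed since the excerpt only shows the loop test:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected=2fbea36c5e46d98965d65389209f432e  # digest printed in the trace above
    iterations=2                               # assumed; not shown in this excerpt
    i=0
    while (( i < iterations )); do
        # same extraction as the trace: md5sum output is "<digest>  <path>"
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "$expected" ]] || { echo "md5 mismatch on pass $i" >&2; exit 1; }
        i=$(( i + 1 ))
    done
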
00:28:05.331 [2024-10-25 15:32:47.920143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.331 [2024-10-25 15:32:47.920160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:05.331 [2024-10-25 15:32:47.920170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:28:05.331 [2024-10-25 15:32:47.920194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.331 [2024-10-25 15:32:47.985207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.332 [2024-10-25 15:32:47.985249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:05.332 [2024-10-25 15:32:47.985278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.332 [2024-10-25 15:32:47.985289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.332 [2024-10-25 15:32:47.985326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.332 [2024-10-25 15:32:47.985337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:05.332 [2024-10-25 15:32:47.985348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.332 [2024-10-25 15:32:47.985358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.332 [2024-10-25 15:32:47.985438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.332 [2024-10-25 15:32:47.985452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:05.332 [2024-10-25 15:32:47.985463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.332 [2024-10-25 15:32:47.985473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.332 [2024-10-25 15:32:47.985490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.332 [2024-10-25 15:32:47.985506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:05.332 [2024-10-25 15:32:47.985516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.332 [2024-10-25 15:32:47.985526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.110183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.110235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:05.591 [2024-10-25 15:32:48.110251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.110261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:05.591 [2024-10-25 15:32:48.211115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:05.591 [2024-10-25 15:32:48.211291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211302] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:05.591 [2024-10-25 15:32:48.211387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:05.591 [2024-10-25 15:32:48.211550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:05.591 [2024-10-25 15:32:48.211619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.591 [2024-10-25 15:32:48.211669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.591 [2024-10-25 15:32:48.211681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:05.591 [2024-10-25 15:32:48.211692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.591 [2024-10-25 15:32:48.211702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.592 [2024-10-25 15:32:48.211745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:05.592 [2024-10-25 15:32:48.211757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:05.592 [2024-10-25 15:32:48.211771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:05.592 [2024-10-25 15:32:48.211780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.592 [2024-10-25 15:32:48.211895] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 400.921 ms, result 0 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:06.990 Remove shared memory files 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:06.990 15:32:49 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81096 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:06.990 00:28:06.990 real 1m28.850s 00:28:06.990 user 2m1.118s 00:28:06.990 sys 0m21.270s 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:06.990 15:32:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.990 ************************************ 00:28:06.990 END TEST ftl_upgrade_shutdown 00:28:06.990 ************************************ 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@14 -- # killprocess 73939 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@950 -- # '[' -z 73939 ']' 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@954 -- # kill -0 73939 00:28:06.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (73939) - No such process 00:28:06.990 Process with pid 73939 is not found 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 73939 is not found' 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81558 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:06.990 15:32:49 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81558 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@831 -- # '[' -z 81558 ']' 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.990 15:32:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:06.990 [2024-10-25 15:32:49.632824] Starting SPDK v25.01-pre git sha1 183001ebc / DPDK 24.03.0 initialization... 
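
The waitforlisten 81558 step above blocks until the freshly launched spdk_tgt accepts RPCs on /var/tmp/spdk.sock. A rough standalone equivalent of that polling contract, assuming rpc.py is on PATH; the real helper in autotest_common.sh additionally honors a max_retries argument and prints the banner seen in the log:

    pid=81558
    rpc_addr=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( retry = 0; retry < 100; retry++ )); do
        # bail out early if the target died instead of coming up
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited" >&2; exit 1; }
        # rpc_get_methods succeeds once the RPC server is accepting connections
        if rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            exit 0
        fi
        sleep 0.5
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1
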
00:28:06.990 [2024-10-25 15:32:49.632958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81558 ] 00:28:07.249 [2024-10-25 15:32:49.815600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.249 [2024-10-25 15:32:49.922881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.215 15:32:50 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:08.215 15:32:50 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:08.215 15:32:50 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:08.473 nvme0n1 00:28:08.473 15:32:51 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:08.473 15:32:51 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:08.473 15:32:51 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:08.733 15:32:51 ftl -- ftl/common.sh@28 -- # stores=1bfeaa74-09b1-4415-9545-0a9d8f5194b5 00:28:08.733 15:32:51 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:08.733 15:32:51 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1bfeaa74-09b1-4415-9545-0a9d8f5194b5 00:28:08.733 15:32:51 ftl -- ftl/ftl.sh@23 -- # killprocess 81558 00:28:08.733 15:32:51 ftl -- common/autotest_common.sh@950 -- # '[' -z 81558 ']' 00:28:08.733 15:32:51 ftl -- common/autotest_common.sh@954 -- # kill -0 81558 00:28:08.733 15:32:51 ftl -- common/autotest_common.sh@955 -- # uname 00:28:08.733 15:32:51 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:08.733 15:32:51 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81558 00:28:08.992 15:32:51 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:08.992 killing process with pid 81558 00:28:08.992 15:32:51 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:08.992 15:32:51 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81558' 00:28:08.992 15:32:51 ftl -- common/autotest_common.sh@969 -- # kill 81558 00:28:08.992 15:32:51 ftl -- common/autotest_common.sh@974 -- # wait 81558 00:28:11.524 15:32:53 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:11.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:11.524 Waiting for block devices as requested 00:28:11.782 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:11.782 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:11.782 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:12.040 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.306 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:17.306 Remove shared memory files 00:28:17.306 15:32:59 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:17.306 15:32:59 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:17.306 15:32:59 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:17.306 15:32:59 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:17.306 15:32:59 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:17.306 15:32:59 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:17.306 15:32:59 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:17.306 00:28:17.306 real 
11m11.684s 00:28:17.306 user 13m46.874s 00:28:17.306 sys 1m27.737s 00:28:17.306 15:32:59 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:17.306 15:32:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.306 ************************************ 00:28:17.306 END TEST ftl 00:28:17.306 ************************************ 00:28:17.306 15:32:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:17.306 15:32:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:17.306 15:32:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:17.306 15:32:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:17.306 15:32:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:17.306 15:32:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:17.306 15:32:59 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:17.306 15:32:59 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:17.306 15:32:59 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:17.306 15:32:59 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:17.306 15:32:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:17.306 15:32:59 -- common/autotest_common.sh@10 -- # set +x 00:28:17.306 15:32:59 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:17.306 15:32:59 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:17.306 15:32:59 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:17.306 15:32:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.266 INFO: APP EXITING 00:28:19.266 INFO: killing all VMs 00:28:19.266 INFO: killing vhost app 00:28:19.266 INFO: EXIT DONE 00:28:19.524 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.089 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:20.089 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:20.089 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:20.089 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:20.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.915 Cleaning 00:28:20.915 Removing: /var/run/dpdk/spdk0/config 00:28:20.915 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:20.915 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:20.915 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:20.915 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:20.915 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:20.915 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:20.915 Removing: /var/run/dpdk/spdk0 00:28:20.915 Removing: /var/run/dpdk/spdk_pid57484 00:28:20.915 Removing: /var/run/dpdk/spdk_pid57727 00:28:20.915 Removing: /var/run/dpdk/spdk_pid57956 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58060 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58116 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58250 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58273 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58483 00:28:20.915 Removing: /var/run/dpdk/spdk_pid58595 00:28:21.173 Removing: /var/run/dpdk/spdk_pid58702 00:28:21.173 Removing: /var/run/dpdk/spdk_pid58824 00:28:21.173 Removing: /var/run/dpdk/spdk_pid58932 00:28:21.173 Removing: /var/run/dpdk/spdk_pid58971 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59008 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59084 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59212 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59665 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59747 00:28:21.173 
Removing: /var/run/dpdk/spdk_pid59821 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59844 00:28:21.173 Removing: /var/run/dpdk/spdk_pid59994 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60010 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60171 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60187 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60257 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60279 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60344 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60362 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60563 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60605 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60694 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60877 00:28:21.173 Removing: /var/run/dpdk/spdk_pid60977 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61020 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61484 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61583 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61698 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61756 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61782 00:28:21.173 Removing: /var/run/dpdk/spdk_pid61866 00:28:21.173 Removing: /var/run/dpdk/spdk_pid62515 00:28:21.173 Removing: /var/run/dpdk/spdk_pid62563 00:28:21.173 Removing: /var/run/dpdk/spdk_pid63069 00:28:21.174 Removing: /var/run/dpdk/spdk_pid63167 00:28:21.174 Removing: /var/run/dpdk/spdk_pid63283 00:28:21.174 Removing: /var/run/dpdk/spdk_pid63340 00:28:21.174 Removing: /var/run/dpdk/spdk_pid63366 00:28:21.174 Removing: /var/run/dpdk/spdk_pid63397 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65297 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65440 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65449 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65467 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65507 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65511 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65523 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65573 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65577 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65589 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65635 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65640 00:28:21.174 Removing: /var/run/dpdk/spdk_pid65656 00:28:21.174 Removing: /var/run/dpdk/spdk_pid67054 00:28:21.174 Removing: /var/run/dpdk/spdk_pid67163 00:28:21.174 Removing: /var/run/dpdk/spdk_pid68600 00:28:21.174 Removing: /var/run/dpdk/spdk_pid69964 00:28:21.174 Removing: /var/run/dpdk/spdk_pid70084 00:28:21.174 Removing: /var/run/dpdk/spdk_pid70207 00:28:21.174 Removing: /var/run/dpdk/spdk_pid70320 00:28:21.174 Removing: /var/run/dpdk/spdk_pid70451 00:28:21.432 Removing: /var/run/dpdk/spdk_pid70531 00:28:21.432 Removing: /var/run/dpdk/spdk_pid70684 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71067 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71109 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71574 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71772 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71875 00:28:21.432 Removing: /var/run/dpdk/spdk_pid71989 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72050 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72076 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72378 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72449 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72540 00:28:21.432 Removing: /var/run/dpdk/spdk_pid72979 00:28:21.432 Removing: /var/run/dpdk/spdk_pid73126 00:28:21.432 Removing: /var/run/dpdk/spdk_pid73939 00:28:21.432 Removing: /var/run/dpdk/spdk_pid74088 00:28:21.432 Removing: /var/run/dpdk/spdk_pid74292 00:28:21.432 Removing: 
/var/run/dpdk/spdk_pid74400 00:28:21.432 Removing: /var/run/dpdk/spdk_pid74736 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75001 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75366 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75567 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75697 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75769 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75902 00:28:21.432 Removing: /var/run/dpdk/spdk_pid75933 00:28:21.432 Removing: /var/run/dpdk/spdk_pid76002 00:28:21.432 Removing: /var/run/dpdk/spdk_pid76200 00:28:21.432 Removing: /var/run/dpdk/spdk_pid76447 00:28:21.432 Removing: /var/run/dpdk/spdk_pid76859 00:28:21.432 Removing: /var/run/dpdk/spdk_pid77295 00:28:21.432 Removing: /var/run/dpdk/spdk_pid77730 00:28:21.432 Removing: /var/run/dpdk/spdk_pid78245 00:28:21.432 Removing: /var/run/dpdk/spdk_pid78387 00:28:21.432 Removing: /var/run/dpdk/spdk_pid78485 00:28:21.432 Removing: /var/run/dpdk/spdk_pid79093 00:28:21.432 Removing: /var/run/dpdk/spdk_pid79163 00:28:21.432 Removing: /var/run/dpdk/spdk_pid79609 00:28:21.432 Removing: /var/run/dpdk/spdk_pid79987 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80518 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80668 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80721 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80785 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80835 00:28:21.432 Removing: /var/run/dpdk/spdk_pid80901 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81096 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81180 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81237 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81326 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81361 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81433 00:28:21.432 Removing: /var/run/dpdk/spdk_pid81558 00:28:21.432 Clean 00:28:21.691 15:33:04 -- common/autotest_common.sh@1449 -- # return 0 00:28:21.691 15:33:04 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:21.691 15:33:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.691 15:33:04 -- common/autotest_common.sh@10 -- # set +x 00:28:21.692 15:33:04 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:21.692 15:33:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:21.692 15:33:04 -- common/autotest_common.sh@10 -- # set +x 00:28:21.692 15:33:04 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:21.692 15:33:04 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:21.692 15:33:04 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:21.692 15:33:04 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:21.692 15:33:04 -- spdk/autotest.sh@394 -- # hostname 00:28:21.692 15:33:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:21.950 geninfo: WARNING: invalid characters removed from testname! 
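
The Clean step above drops the DPDK runtime state that accumulated over the run: the spdk0 file-prefix directory (config, memseg/memzone fbarrays, hugepage bookkeeping) plus one /var/run/dpdk/spdk_pid* entry per target the tests launched. An approximation of that cleanup, with paths mirrored from the Removing: lines; the actual autotest cleanup also returns hugepages and removes leftover sockets:

    # remove per-prefix runtime files left by the primary process
    rm -rf /var/run/dpdk/spdk0
    # remove per-pid runtime directories from every spdk_tgt/spdk_dd launched
    rm -rf /var/run/dpdk/spdk_pid*
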
00:28:48.494 15:33:28 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:49.060 15:33:31 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:51.593 15:33:33 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:53.496 15:33:35 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:55.442 15:33:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:57.995 15:33:40 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.897 15:33:42 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:59.897 15:33:42 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:28:59.897 15:33:42 -- common/autotest_common.sh@1689 -- $ lcov --version 00:28:59.897 15:33:42 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:28:59.897 15:33:42 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:28:59.897 15:33:42 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:28:59.897 15:33:42 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:28:59.897 15:33:42 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:28:59.897 15:33:42 -- scripts/common.sh@336 -- $ IFS=.-: 00:28:59.897 15:33:42 -- scripts/common.sh@336 -- $ read -ra ver1 00:28:59.897 15:33:42 -- scripts/common.sh@337 -- $ IFS=.-: 00:28:59.897 15:33:42 -- scripts/common.sh@337 -- $ read -ra ver2 00:28:59.897 15:33:42 -- scripts/common.sh@338 -- $ local 'op=<' 00:28:59.897 15:33:42 -- scripts/common.sh@340 -- $ ver1_l=2 00:28:59.897 15:33:42 -- scripts/common.sh@341 -- $ ver2_l=1 00:28:59.897 15:33:42 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:28:59.897 15:33:42 -- scripts/common.sh@344 -- $ case "$op" in 00:28:59.897 15:33:42 -- scripts/common.sh@345 -- $ : 1 00:28:59.897 15:33:42 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:28:59.897 15:33:42 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:59.897 15:33:42 -- scripts/common.sh@365 -- $ decimal 1 00:28:59.897 15:33:42 -- scripts/common.sh@353 -- $ local d=1 00:28:59.897 15:33:42 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:28:59.897 15:33:42 -- scripts/common.sh@355 -- $ echo 1 00:28:59.897 15:33:42 -- scripts/common.sh@365 -- $ ver1[v]=1 00:28:59.897 15:33:42 -- scripts/common.sh@366 -- $ decimal 2 00:28:59.897 15:33:42 -- scripts/common.sh@353 -- $ local d=2 00:28:59.897 15:33:42 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:28:59.897 15:33:42 -- scripts/common.sh@355 -- $ echo 2 00:28:59.897 15:33:42 -- scripts/common.sh@366 -- $ ver2[v]=2 00:28:59.897 15:33:42 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:28:59.897 15:33:42 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:28:59.897 15:33:42 -- scripts/common.sh@368 -- $ return 0 00:28:59.897 15:33:42 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.897 15:33:42 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:28:59.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.897 --rc genhtml_branch_coverage=1 00:28:59.897 --rc genhtml_function_coverage=1 00:28:59.897 --rc genhtml_legend=1 00:28:59.897 --rc geninfo_all_blocks=1 00:28:59.897 --rc geninfo_unexecuted_blocks=1 00:28:59.897 00:28:59.897 ' 00:28:59.897 15:33:42 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:28:59.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.897 --rc genhtml_branch_coverage=1 00:28:59.897 --rc genhtml_function_coverage=1 00:28:59.897 --rc genhtml_legend=1 00:28:59.897 --rc geninfo_all_blocks=1 00:28:59.897 --rc geninfo_unexecuted_blocks=1 00:28:59.897 00:28:59.898 ' 00:28:59.898 15:33:42 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:28:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.898 --rc genhtml_branch_coverage=1 00:28:59.898 --rc genhtml_function_coverage=1 00:28:59.898 --rc genhtml_legend=1 00:28:59.898 --rc geninfo_all_blocks=1 00:28:59.898 --rc geninfo_unexecuted_blocks=1 00:28:59.898 00:28:59.898 ' 00:28:59.898 15:33:42 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:28:59.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.898 --rc genhtml_branch_coverage=1 00:28:59.898 --rc genhtml_function_coverage=1 00:28:59.898 --rc genhtml_legend=1 00:28:59.898 --rc geninfo_all_blocks=1 00:28:59.898 --rc geninfo_unexecuted_blocks=1 00:28:59.898 00:28:59.898 ' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.898 15:33:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:28:59.898 15:33:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:59.898 15:33:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.898 15:33:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.898 15:33:42 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.898 15:33:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.898 15:33:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.898 15:33:42 -- paths/export.sh@5 -- $ export PATH 00:28:59.898 15:33:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.898 15:33:42 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:59.898 15:33:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:28:59.898 15:33:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729870422.XXXXXX 00:28:59.898 15:33:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729870422.ZMoNFe 00:28:59.898 15:33:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:28:59.898 15:33:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:28:59.898 15:33:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:28:59.898 15:33:42 -- common/autotest_common.sh@10 -- $ set +x 00:28:59.898 15:33:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:59.898 15:33:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:28:59.898 15:33:42 -- pm/common@17 -- $ local monitor 00:28:59.898 15:33:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:59.898 15:33:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:28:59.898 15:33:42 -- pm/common@25 -- $ sleep 1 00:28:59.898 15:33:42 -- pm/common@21 -- $ date +%s 00:28:59.898 15:33:42 -- pm/common@21 -- $ date +%s 00:28:59.898 15:33:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729870422 00:28:59.898 15:33:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729870422 00:28:59.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729870422_collect-cpu-load.pm.log 00:28:59.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729870422_collect-vmstat.pm.log 00:29:00.834 15:33:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:29:00.834 15:33:43 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:29:00.834 15:33:43 -- spdk/autopackage.sh@14 -- $ timing_finish 00:29:00.834 15:33:43 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:00.834 15:33:43 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:00.834 15:33:43 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:00.834 15:33:43 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:00.834 15:33:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:00.834 15:33:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:00.834 15:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.834 15:33:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:00.834 15:33:43 -- pm/common@44 -- $ pid=83291 00:29:00.834 15:33:43 -- pm/common@50 -- $ kill -TERM 83291 00:29:00.834 15:33:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:00.834 15:33:43 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:00.834 15:33:43 -- pm/common@44 -- $ pid=83292 00:29:00.834 15:33:43 -- pm/common@50 -- $ kill -TERM 83292 00:29:00.834 + [[ -n 5249 ]] 00:29:00.834 + sudo kill 5249 00:29:01.102 [Pipeline] } 00:29:01.117 [Pipeline] // timeout 00:29:01.122 [Pipeline] } 00:29:01.137 [Pipeline] // stage 00:29:01.142 [Pipeline] } 00:29:01.156 [Pipeline] // catchError 00:29:01.165 [Pipeline] stage 00:29:01.167 [Pipeline] { (Stop VM) 00:29:01.180 [Pipeline] sh 00:29:01.461 + vagrant halt 00:29:03.996 ==> default: Halting domain... 00:29:10.570 [Pipeline] sh 00:29:10.851 + vagrant destroy -f 00:29:13.385 ==> default: Removing domain... 
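
The pm/common trace above shows how the autopackage resource monitors (collect-cpu-load, collect-vmstat) are managed: each writes a pid file under the power/ output directory when started, and stop_monitor_resources later reads each file and sends TERM (the kill -TERM 83291 / 83292 lines). A condensed sketch of that start/stop contract; the helper names here are illustrative, and the real monitors take extra flags (-l, -p) for log and pid-file naming:

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    start_monitor() {   # usage: start_monitor ./collect-cpu-load
        "$1" -d "$power_dir" &
        echo $! > "$power_dir/$(basename "$1").pid"
    }

    stop_monitors() {
        local pidfile pid
        for pidfile in "$power_dir"/*.pid; do
            [[ -e $pidfile ]] || continue
            pid=$(<"$pidfile")
            kill -TERM "$pid" 2>/dev/null   # matches the kill -TERM lines above
            rm -f "$pidfile"
        done
    }
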
00:29:14.332 [Pipeline] sh 00:29:14.612 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:14.621 [Pipeline] } 00:29:14.635 [Pipeline] // stage 00:29:14.639 [Pipeline] } 00:29:14.652 [Pipeline] // dir 00:29:14.657 [Pipeline] } 00:29:14.670 [Pipeline] // wrap 00:29:14.675 [Pipeline] } 00:29:14.686 [Pipeline] // catchError 00:29:14.693 [Pipeline] stage 00:29:14.695 [Pipeline] { (Epilogue) 00:29:14.708 [Pipeline] sh 00:29:14.989 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:20.264 [Pipeline] catchError 00:29:20.266 [Pipeline] { 00:29:20.278 [Pipeline] sh 00:29:20.556 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:20.556 Artifacts sizes are good 00:29:20.822 [Pipeline] } 00:29:20.836 [Pipeline] // catchError 00:29:20.846 [Pipeline] archiveArtifacts 00:29:20.853 Archiving artifacts 00:29:20.977 [Pipeline] cleanWs 00:29:20.988 [WS-CLEANUP] Deleting project workspace... 00:29:20.988 [WS-CLEANUP] Deferred wipeout is used... 00:29:20.994 [WS-CLEANUP] done 00:29:20.995 [Pipeline] } 00:29:21.010 [Pipeline] // stage 00:29:21.015 [Pipeline] } 00:29:21.028 [Pipeline] // node 00:29:21.034 [Pipeline] End of Pipeline 00:29:21.069 Finished: SUCCESS
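
One last traced helper worth unpacking: the scripts/common.sh version check near the end (cmp_versions, invoked as lt 1.15 2) decides whether the installed lcov is new enough to need the branch/function-coverage --rc options. It splits each version on ., -, and :, then compares fields numerically up to the longer length. Reassembled from the traced lines into a simplified self-contained sketch:

    decimal() {   # keep only plain integer fields, everything else counts as 0
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local ver1 ver2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            a=$(decimal "${ver1[v]:-0}")
            b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && { [[ $2 == '>' ]]; return; }
            (( a < b )) && { [[ $2 == '<' ]]; return; }
        done
        # all fields equal: only the inclusive/equality operators hold
        [[ $2 == '>=' || $2 == '<=' || $2 == '==' ]]
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # 'lt 1.15 2' succeeds, as in this run
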